GNUPlot Processing Flow
The data acquisition server will push data to the data analysis server every half hour. As part of that push, it will create a text file that the graphing server watches for to tell whether there is new data.
The text file will list the location names and values for which graphs need to be generated.
The script will loop over this list, injecting the appropriate location and value information into each graph as it is created (see the sketch below).
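Here is a minimal sketch of that loop, assuming the trigger file is a plain list with one location name per line and that gnuplot is fed a small script per location. The file names, paths, and gnuplot settings are placeholders I'm assuming for illustration, not the final design.

```python
#!/usr/bin/env python3
"""Sketch: poll for the trigger file and generate one graph per location.

Assumed layout (not from the notes): the trigger file is
/data/incoming/new_data.txt with one location name per line, and each
location's data lives in /data/incoming/<location>.dat as
"timestamp value" pairs.
"""
import os
import subprocess

TRIGGER = "/data/incoming/new_data.txt"   # written by the acquisition push
DATA_DIR = "/data/incoming"
GRAPH_DIR = "/data/graphs"                # local output; pushed to the web server later


def main():
    if not os.path.exists(TRIGGER):
        return  # no new data this half hour

    with open(TRIGGER) as f:
        locations = [line.strip() for line in f if line.strip()]

    for loc in locations:
        datafile = os.path.join(DATA_DIR, f"{loc}.dat")
        pngfile = os.path.join(GRAPH_DIR, f"{loc}.png")
        # Inject the location-specific names into a gnuplot script on stdin.
        script = f"""
            set terminal png size 800,600
            set output '{pngfile}'
            set title '{loc}'
            set xdata time
            set timefmt '%s'
            plot '{datafile}' using 1:2 with lines title '{loc}'
        """
        subprocess.run(["gnuplot"], input=script, text=True, check=True)

    os.remove(TRIGGER)  # mark this batch as processed


if __name__ == "__main__":
    main()
```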
Additionally, the web pages need to be created on the fly to display these graphs. What I'm thinking is to just have a single template full of variables; their values will be passed to the page via GET parameters from the links on the home page.
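A rough sketch of that template idea, assuming a CGI-style page where a home-page link looks something like graph.py?location=site1&period=24h. The parameter names and the HTML markup are placeholders I'm assuming here, not a settled format.

```python
#!/usr/bin/env python3
"""Sketch: one template page, filled in from GET parameters."""
import os
from string import Template
from urllib.parse import parse_qs

PAGE = Template("""\
<html>
  <head><title>$location graphs</title></head>
  <body>
    <h1>$location</h1>
    <img src="/graphs/$location.png" alt="$location ($period)">
  </body>
</html>
""")


def main():
    # Under CGI, GET parameters arrive in the QUERY_STRING environment variable.
    qs = parse_qs(os.environ.get("QUERY_STRING", ""))
    location = qs.get("location", ["unknown"])[0]
    period = qs.get("period", ["24h"])[0]
    print("Content-Type: text/html\r\n\r\n", end="")
    print(PAGE.substitute(location=location, period=period))


if __name__ == "__main__":
    main()
```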
The current images will be stored on the webserver, and the archived images will be maintained on the data analysis server. (I'm going to have to build an archive server to store data older than a year, but that is way in the future.)
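One way the split might work in practice, sketched below: after a graph is generated, push the current image to the webserver and keep a dated copy in a local archive directory on the data analysis server. The host name, paths, and the use of rsync are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Sketch: publish the current image to the webserver and keep a dated
copy in the local archive on the data analysis server."""
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

ARCHIVE_DIR = Path("/data/archive/graphs")   # on the data analysis server
WEB_TARGET = "webserver:/var/www/graphs/"    # rsync destination (assumed host/path)


def publish(pngfile: str) -> None:
    src = Path(pngfile)
    # Keep a timestamped copy locally for the archive.
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, ARCHIVE_DIR / f"{src.stem}-{stamp}{src.suffix}")
    # Push the current image to the webserver for display.
    subprocess.run(["rsync", "-a", str(src), WEB_TARGET], check=True)


if __name__ == "__main__":
    publish("/data/graphs/site1.png")
```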