<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.jayscafe.net/index.php?action=history&amp;feed=atom&amp;title=GNUPlot_Processing_Flow</id>
	<title>GNUPlot Processing Flow - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.jayscafe.net/index.php?action=history&amp;feed=atom&amp;title=GNUPlot_Processing_Flow"/>
	<link rel="alternate" type="text/html" href="https://wiki.jayscafe.net/index.php?title=GNUPlot_Processing_Flow&amp;action=history"/>
	<updated>2026-04-08T12:37:16Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.37.2</generator>
	<entry>
		<id>https://wiki.jayscafe.net/index.php?title=GNUPlot_Processing_Flow&amp;diff=2&amp;oldid=prev</id>
		<title>Jayctheriot: Information flow for 24-hour, weekly, monthly, quarterly and yearly graphs of OpenWeatherMap data</title>
		<link rel="alternate" type="text/html" href="https://wiki.jayscafe.net/index.php?title=GNUPlot_Processing_Flow&amp;diff=2&amp;oldid=prev"/>
		<updated>2018-01-30T10:41:18Z</updated>

		<summary type="html">&lt;p&gt;Information flow for 24-hour, weekly, monthly, quarterly and yearly graphs of OpenWeatherMap data&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;The data acquisition server will push data to the data analysis server every half hour.&lt;br /&gt;
As part of each push, it will create a text file that the graphing server polls to detect new data.&lt;br /&gt;
&lt;br /&gt;
The text file will list the locations and values for which graphs need to be generated.&lt;br /&gt;
&lt;br /&gt;
The script will loop over this list, injecting the appropriate location and value into each graph as it is generated.&lt;br /&gt;
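&lt;br /&gt;
The loop described above can be sketched as a small shell script. This is a sketch only: the hand-off file name, its one-location-per-line format, the gnuplot variable name, and the plot script name are all assumptions for illustration, not taken from this page (echo stands in for the real gnuplot call so the sketch runs anywhere).&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch of the graphing loop: the data push leaves a text
# file listing locations, and this script walks that list, feeding each
# location into a gnuplot invocation.
LIST=locations.txt
# Stand-in for the file the data acquisition server would push:
printf 'Lafayette\nNew_Iberia\n' > "$LIST"
cat "$LIST" | while IFS= read -r location; do
  # The real call would be something like:
  #   gnuplot -e "location='${location}'" plot_location.gp
  # echo stands in so the sketch runs without gnuplot installed.
  echo "would plot: ${location}"
done
```
&lt;br /&gt;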
&lt;br /&gt;
Additionally, the web pages need to be generated on the fly to serve the graphs.  What I'm thinking is just to have one template, full of variables.  These variables will be passed to the page through GET parameters in the links on the home page.&lt;br /&gt;
&lt;br /&gt;
The current images will be stored on the web server.  The archived images will be maintained on the data analysis server. --- I'm going to have to build an archive server to store data &amp;gt; 1 year old, but that is way in the future.&lt;/div&gt;</summary>
		<author><name>Jayctheriot</name></author>
	</entry>
</feed>