It would be short work to grab the current observations from the local station’s file, then grab the full projected forecast from the zone’s XML-formatted data file (example for Bremerton). I checked the full forecast XML, and unfortunately it does not include the local current conditions. In the grand scheme this is not a big deal, since I’ll need to grab three forecast files already; one more file for current observations is not much more of a burden.
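Fetching the files themselves should be short work with LWP::UserAgent (installed further down). Here is a minimal sketch of the download step; the URLs are placeholders, not the real NWS endpoints:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 30 );

# Placeholder URLs -- substitute the real zone forecast and
# current-observation files for the station in question.
my @sources = (
    'http://forecast.weather.gov/path/to/zone-forecast.xml',
    'http://weather.gov/path/to/current-observations.xml',
);

for my $url (@sources) {
    my $response = $ua->get($url);
    die 'Failed to fetch ', $url, ': ', $response->status_line, "\n"
        unless $response->is_success;
    my $xml = $response->decoded_content;    # body, charset-decoded
    # ... hand $xml off to the parsing stage ...
}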
One section of the forecast XML contains the Minimum Forecast Temperatures, covering the seven-day outlook:
<temperature type="minimum" units="Fahrenheit" time-layout="k-p24h-n7-1"> <name>Daily Minimum Temperature</name> <value>40</value> <value>45</value> <value>43</value> <value>44</value> <value>43</value> <value>43</value> <value>42</value> </temperature>
Another section contains the Maximum Forecast Temperatures for the same outlook:
<temperature type="maximum" units="Fahrenheit" time-layout="k-p24h-n7-2"> <name>Daily Maximum Temperature</name> <value>59</value> <value>61</value> <value>60</value> <value>61</value> <value>61</value> <value>60</value> <value>60</value> </temperature>
Also in the file is the projected probability of precipitation, in percent:
<probability-of-precipitation type="12 hour" units="percent" time-layout="k-p12h-n14-1">
  <name>12 Hourly Probability of Precipitation</name>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value>30</value>
  <value>30</value>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
  <value xsi:nil="true"/>
</probability-of-precipitation>
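Note the xsi:nil entries: the parser will have to treat "no data issued" as something different from a zero-percent chance. A sketch of that check, continuing with the $doc from above:

my $XSI = 'http://www.w3.org/2001/XMLSchema-instance';

my ($pop) = $doc->findnodes('//probability-of-precipitation');
die "no probability-of-precipitation section found\n" unless $pop;

for my $value ( $pop->findnodes('./value') ) {
    if ( defined $value->getAttributeNS( $XSI, 'nil' ) ) {
        print "n/a\n";    # no probability issued for this period
    }
    else {
        print $value->textContent, "%\n";
    }
}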
The XML is also kind enough to include the image names. Based on this, the actual visually rendered page is a style sheet applied to the XML data, producing a human-readable web page (something I need to get more familiar with).
<conditions-icon type="forecast-NWS" time-layout="k-p12h-n14-1">
  <name>Conditions Icon</name>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nbkn.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/sct.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nsct.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/bkn.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nbkn.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/shra30.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nshra30.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/shra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nshra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/shra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nshra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/shra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/nshra.jpg</icon-link>
  <icon-link>http://www.nws.noaa.gov/weather/images/fcicons/shra.jpg</icon-link>
</conditions-icon>
Design and coding of the file downloads, processing, and construction of the status file is the next step. One thought I have is to download all the images and store them locally. This will keep my page off of their webserver’s request log; external includes of images can be a burden on a remote server if you start to abuse it, and there is no sense in being a bad netizen. The question I need to answer is: CAN I get a list of all the potential images (and there are many) and automate the download to the system I plan to run this little product on?
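As a first sketch of that caching idea (the cache directory and the use of LWP::Simple's mirror are my own choices here, not a settled design), the <icon-link> elements parsed out of the $doc above give the list for the current forecast:

use File::Basename qw(basename);
use LWP::Simple qw(mirror is_success);

my $cache_dir = "$ENV{HOME}/wx-icons";    # hypothetical cache location
mkdir $cache_dir unless -d $cache_dir;

for my $link ( $doc->findnodes('//icon-link') ) {
    my $url   = $link->textContent;
    my $local = "$cache_dir/" . basename($url);
    next if -e $local;                    # already cached
    my $status = mirror( $url, $local );
    warn "Could not fetch $url (HTTP $status)\n"
        unless is_success($status);
}

This only collects the icons a given forecast happens to reference, so the full-catalog question remains open.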
Automation
Automation of the XML downloading and parsing will be handled by PERL (at least in this first tool; it may be converted to JAVA in a second iteration, once I’ve performed usability testing of the first).
Additional binary libraries are required first:
apt-get install libxml2-dev
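If the development files landed correctly, the xml2-config tool that ships with libxml2-dev should answer:

xml2-config --version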
So, here is the skinny. First, I have to move into the location where CPAN unravels the packages:
cd /root/.cpan/build
Next, move into the directory for XML::LibXML and manually build with these steps:
cd XML-LibXML-1.70-OucP9U
perl Makefile.PL
make
make install
Next, build the XML::Atom package:
cd XML-Atom-0.37-pcVvAG/
perl Makefile.PL
make
make install
Next, a few PERL modules must be installed. NOTE: the XML::SAX modules are suggested, as they provide a big boost in XML parsing performance. You might be surprised at how much processing power the typical XML parsing packages require; XML::SAX and XML::SAX::Expat provide a faster path for XML parsing.
cpan LWP::UserAgent
cpan HTML::TokeParser
cpan XML::Feed
cpan XML::SAX
cpan XML::SAX::Expat
cpan Data::Dumper
cpan XML::Atom
NOTE: As hard as I tried, CPAN would not install XML::LibXML or XML::Atom. So, I had to get down into the muck and manually build those packages, as shown above. The trick was installing the libxml2-dev package mentioned earlier.
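A quick sanity check that the hand-built modules actually landed where Perl can find them:

perl -MXML::LibXML -e 'print "XML::LibXML $XML::LibXML::VERSION\n"'
perl -MXML::Atom -e 'print "XML::Atom $XML::Atom::VERSION\n"'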
One critical step: if you are going to write and use custom PERL libraries, you need to add the following line to a shell initialization file. I’ve been using bash, for lack of any reason to use another shell, so I added this line to my ~/.bash_profile:
export PERL5LIB=$HOME/plib
Once that is done, you’ll need to source the updated config file (source ~/.bash_profile), or log out and back in, to gain the benefits of the change.
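To show why the PERL5LIB step matters, here is a tiny, hypothetical module (WxStatus is a made-up name) saved as ~/plib/WxStatus.pm:

package WxStatus;
use strict;
use warnings;

# Convert Fahrenheit (as the NWS XML reports it) to Celsius.
sub degrees_f_to_c {
    my ($f) = @_;
    return ( $f - 32 ) * 5 / 9;
}

1;

Any script can then load it without an install step:

use WxStatus;
printf "59F is %.1fC\n", WxStatus::degrees_f_to_c(59);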