The Google Data API is designed to make it easy to get data from Google and use it in your application. I did not find it easy. In fact, I really haven't figured it out yet.
What I want to do is to archive my web log, eventually in an automated way. I'd like to have HTML for the individual posts, preferably human-readable HTML similar to what I originally typed in the editor window. The Atom feed would be perfect since the format is compact. Then I could parse it somehow and go grab the original images from Blogger at full resolution. This is important because I haven't kept copies of those images. They're usually just screenshots that I typically discard.
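One observation about the image links: Blogger-hosted image URLs typically embed a size segment such as /s320/, and swapping it for a larger size like /s1600/ is one way to ask for the full-resolution file. That convention is an assumption worth checking against real links; here is a minimal sketch using only the standard library (the URL is invented):

```python
import re

# An invented example of a Blogger image URL with a /s320/ size segment.
THUMB = "http://bp3.blogger.com/_abc123/RsT/s320/screenshot.png"

def full_size(url, size="s1600"):
    """Rewrite the /sNNN/ size segment of a Blogger image URL.

    Assumes the URL embeds exactly one segment like /s320/;
    this is a guess at the naming convention, not a documented API.
    """
    return re.sub(r"/s\d+/", "/%s/" % size, url)
```

If the convention holds, `full_size(THUMB)` points at the same image in its largest size.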
Perhaps I would try to organize everything into directories, say individual posts grouped by month or by topic, and maybe change the image links so they point to the local copies of the images. Ideas:
• Export from Blogger: the XML isn't displayed properly by Safari
• the Atom feed looks nice, but I haven't figured out how to get all the posts at once. This looks like it works, but then you see it's been broken into two pages:
http://telliott99.blogspot.com/feeds/posts/default?max-results=500
• The standard URL does work
http://telliott99.blogspot.com/search?max-results=500
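The broken-into-pages behavior is standard GData paging: when a feed doesn't fit in one response, it includes a link with rel="next" pointing at the following page (using a start-index parameter), and you keep following that link until it disappears. A stdlib-only sketch of finding the next-page link; the feed fragment below is invented for illustration:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A made-up fragment of the kind of paged feed Blogger returns.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="self" href="http://telliott99.blogspot.com/feeds/posts/default?max-results=500"/>
  <link rel="next" href="http://telliott99.blogspot.com/feeds/posts/default?start-index=501&amp;max-results=500"/>
</feed>"""

def next_page(xml_text):
    """Return the href of the rel='next' link, or None on the last page."""
    root = ET.fromstring(xml_text)
    for link in root.findall(ATOM + "link"):
        if link.get("rel") == "next":
            return link.get("href")
    return None
```

Fetch a page, save the entries, call `next_page` on the response, and loop until it returns None.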
I can save either of these from Safari as a Web Archive (though that is really a special Apple format, not plain HTML). I can grab the source and save it as HTML, but it's pretty ugly HTML.
Google Data API
As I said, I did not find this API easy, but I think I got it working a little bit---baby steps. I would love to find tutorials for this stuff.
I grabbed the Python client library for the Blogger data API and installed it as usual. I ran:
./tests/run_data_tests.py
and everything looked fine. I didn't run the sample
BloggerExample.py
because the version they have requires a log-in, and I didn't want to chance screwing up the blog. By digging in the source to try to change that, I just got lost. Eventually I found an introductory video on YouTube, but it doesn't go far enough. From the video, I learned how to do this: after a deprecation warning for use of the sha module, I get output containing some long numbers. I suppose the numbers are ids for the blog and the individual posts.
So now we need to go farther... After looking more carefully (patiently) at the instructions here, I see that what I'm supposed to do is this:
http://www.blogger.com/feeds/profileID/blogs
where profileID is obtained from the URL displayed when I click to display my profile.
This just prints the number of blogs I have!
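Counting the blogs is just counting the entry elements in that metafeed once the XML is parsed. A stdlib-only version against a made-up response (the real feed has much more in each entry):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A made-up skeleton of the metafeed returned by
# http://www.blogger.com/feeds/profileID/blogs
metafeed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>My only blog</title></entry>
</feed>"""

root = ET.fromstring(metafeed)
blogs = root.findall(ATOM + "entry")
n_blogs = len(blogs)  # one entry per blog in the feed
```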
By reading more in the instructions, I finally got some real data:
Output from the dir call includes: 'author', 'category', 'content', 'contributor', 'control', 'extension_attributes', 'extension_elements', 'id', 'link', 'published', 'rights', 'source', 'summary', 'text', 'title', 'updated'.
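Those attribute names mirror the elements of an Atom entry, so the same fields can be read straight from the raw XML if you'd rather skip the gdata objects. A sketch with an invented entry; note the content element holds the post's HTML as an escaped string:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# An invented entry with some of the same elements the gdata object exposes.
entry_xml = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Archiving the blog</title>
  <published>2008-05-01T10:00:00.000-04:00</published>
  <content type="html">&lt;p&gt;Hello&lt;/p&gt;</content>
</entry>"""

entry = ET.fromstring(entry_xml)
title = entry.find(ATOM + "title").text
published = entry.find(ATOM + "published").text
content = entry.find(ATOM + "content").text  # the post's HTML, unescaped by the parser
```

Writing `content` to a .html file is essentially the archive I'm after.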
What I need to do:
• Figure out the URL to send to request a particular entry
• Figure out how to work with the XML data format I'm getting
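On the first point, if I'm reading the docs right, a single post can be requested at a URL of the form http://www.blogger.com/feeds/blogID/posts/default/postID. A trivial helper to build it (the ids in the usage note are placeholders, not a real blog):

```python
def entry_url(blog_id, post_id):
    """Build the GData URL for one post.

    Assumes the blogID/posts/default/postID pattern from the docs;
    the ids passed in are whatever the feed's entry ids contain.
    """
    return ("http://www.blogger.com/feeds/%s/posts/default/%s"
            % (blog_id, post_id))
```

For example, `entry_url("6181723", "8456139380")` would be the URL for one specific (here made-up) post.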