Blog - Unity Behind Diversity

Searching for beauty in the dissonance

SOLUTION: gPodder 2.20.3 on N900: database disk image malformed

After some strange behaviour in gPodder 2.20.3 yesterday on my N900 (it stopped responding to episode actions), I quit gPodder and tried to start it up again, but it would crash during startup every time with a “database disk image malformed” error from line 316 of dbsqlite.py on the query: “SELECT COUNT(*), state, played FROM episodes GROUP BY state, played”.

First, I opened up the sqlite database directly:
sqlite3 ~/.config/gpodder/database.sqlite

I could run that query and others no problem.

However, I then found this guide on repairing a corrupt sqlite database. Following it, I ran the integrity check command below, which returned a couple of errors along with the “database disk image malformed” message:
sqlite> pragma integrity_check;

So I followed the instructions from spiceworks, dumped my database to file and reloaded it into a new database:
cd ~/.config/gpodder/
echo .dump | sqlite3 database.sqlite > gpodder.sql # generate dump file
mv database.sqlite database.sqlite.bak # backup original database
sqlite3 -init gpodder.sql database.sqlite # initialize a new database from the dump file
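Before pointing gPodder at the rebuilt database, it’s worth re-running the integrity check on it (an extra sanity check of my own, not part of the spiceworks guide):
sqlite3 database.sqlite "pragma integrity_check;" # should print "ok" if the rebuild is clean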

And, voila, gPodder is working again.


Amhrán Mhuínse

I fell in love with this song 10 years ago in Ireland (a different but similar recording): Amhrán Mhuínse (The Song of Muínis)

I fell in love with the music — I never understood the lyrics. By chance, I scrolled past it in a playlist today, and it spoke to my heart, so I put it on repeat and decided to spend some time searching for an English translation.

*hand to heart*

If I were three leagues out at sea or on mountains far from home,
Without any living thing near me but the green fern and the heather,
The snow being blown down on me, and the wind snatching it off again,
And I were to be talking to my fair Taimín and I would not find the night long.

Dear Virgin Mary, what will I do, this winter is coming on cold.
And, dear Virgin Mary, what will this house do and all that are in it?
Wasn’t it young, my darling, that you went, during a grand time,
At a time when the cuckoo was playing a tune and every green leaf was growing?

If I have my children home with me the night that I will die,
They will wake me in mighty style three nights and three days;
There will be fine clay pipes and kegs that are full,
And there will be three mountainy women to keen me when I’m laid out.

And cut my coffin out for me, from the choicest brightest boards;
And if Seán Hynes is in Muínis, let it be made by his hand.
Let my cap and my ribbon be inside in it, and be placed stylishly on my head,
And Big Paudeen will take me to Muínis for rough will be the day.

And as I go west by Inse Ghainimh, let the flag be on the mast.
Oh, do not bury me in Leitir Calaidh, for it’s not where my people are,
But bring me west to Muínis, to the place where I will be mourned aloud;
The lights will be on the dunes, and I will not be lonely there.


Degooglifying (Part IV): Calendar

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email, feed reader, search.

Finding a replacement for Google Calendar has been one of the most difficult steps so far in my degooglification process, but in the end I’ve found a bunch of great, libre alternatives.

Beyond the basic criteria for free network services, I was looking for:

  • desktop, web and mobile clients
  • offline access, especially for mobile
  • multiple calendars
  • access controls for sharing calendars
  • ability to subscribe and share calendars with other servers
  • applicable for business and personal use

First Attempt: SyncML using SyncEvolution and Funambol

I started with SyncML, an open standard for syncing calendar and contact data. SyncEvolution is a great SyncML client, with both GUI and command line tools available for GNOME and Maemo GNU/Linux, and Funambol is an AGPL SyncML server, with an Android client.

I set up Funambol and migrated from Google Calendar in July 2011, using SyncEvolution on my N900 and my laptop, but there were a bunch of problems. It was unstable around the edges, not handling deletes very well, and sometimes choking and failing with certain characters ( ” maybe?) in event titles. When I tried to switch my parents over on Android, it was a nightmare trying to figure out where the sync was failing, and they eventually moved to Google Calendar instead. SyncEvolution only syncs with Evolution on the desktop; there’s no mature SyncML solution for Lightning. The Funambol free software edition felt like a bit of an afterthought as well, with poor or outdated documentation, and a crippled, totally useless “demo” web UI. There was no calendar sharing or access control either. Plus, Funambol is a pretty heavy application, targeted at mobile carriers, not someone who wants to run it from their living room.

SyncML with Funambol and SyncEvolution allowed me to leave Google Calendar behind, but I ended up living off my mobile calendar, using Funambol essentially as a backup service. I had no web client, no shared calendars, and eventually stopped syncing to Evolution on my laptop. Part of the problem was Funambol, but part of the problem was also SyncML, which seems to be a clunky standard, designed for an older paradigm of syncing with offline mobile clients.

I quickly realized that CalDAV was the better open standard.

The Solution: CalDAV

CalDAV is an extension of WebDAV and an internet standard for remote access to calendar data. It’s a more modern standard than SyncML — though SyncML does have better support on older mobile devices. (There’s also CardDAV for contacts, but I’ll leave that for a future post.)

Servers: SOGo, ownCloud or Radicale

However, there are a ton of CalDAV servers to choose from.

Here are my favourites so far:

SOGo [demo]
  • Pros: works with anything via connectors; well-integrated with Thunderbird/Lightning, with a web UI modelled after Lightning; Ubuntu/Debian repos
  • Cons: UI isn’t super pretty; comes with a webmail client I don’t want; heavy, and took some effort to install (e.g. I made a custom MySQL user auth table, in the absence of an LDAP server)

ownCloud [demo]
  • Pros: very alive; support for contacts, photos, music, etc.; Ubuntu/Debian repos
  • Cons: newer (immature when I first tried it in 2011); seemed more of a personal than a business tool, but that may have changed. Update: this has changed; as of 2015, ownCloud is strong, mature and thriving.

Radicale
  • Pros: simple, elegant, light-weight
  • Cons: for sysadmins only (no UI)

I tried a few others, but I wouldn’t recommend them:

  • Funambol CalDAV connector: In theory, best of both worlds with SyncML and CalDAV support, but I couldn’t figure out if there was an updated stable version, how to get it working with Funambol, etc., and this would still carry the Funambol issues and lack a web client or CardDAV support
  • DAViCal: seemed robust, but also onerous to configure and administer, and the web UI is only for administration (no web calendar client). This could work, but it just felt a bit onerous to use.
  • Update: lnxwalt mentions PHP Web Calendar, which I’d missed. I tried the online demo, but it looks/feels pretty ~2005: an awkward, not fully-featured UI, a focus on old standards like iCal (rather than true CalDAV?), a CVS wishlist that includes SyncML support and a Java servlet, import/export from Palm as a key feature, etc.

Others I didn’t bother to try:

  • Zimbra: Seemed like heavy-duty Groupware with a bunch of things I didn’t need or want — though could make sense if that’s what you’re looking for.
  • Horde (Kronolith): I did try Horde, but using the old interface a few years back. That UI felt 10+ years old, but it’s since undergone a complete overhaul and I haven’t looked at it since. Also, it’s a groupware suite, which may be a plus or a minus. However, I don’t think it uses real CalDAV.
  • Bedework: Java, seems heavy, without any obvious benefits or easy packaging
  • Apple Calendar and Contacts Server: while Apache licensed, it really doesn’t seem to be designed to enable other people to run the software — I didn’t get very far looking into this
  • Update: Jean Baptiste Favre has a great tutorial on implementing SabreDAV, a PHP library which implements WebDAV and its CalDAV and CardDAV extensions, if you want to build your own solution.

I’m using SOGo. Though, that’s partially because it was the most comprehensive solution that I had working at the time when my wife went back to work after maternity leave and we needed sharable calendars again to coordinate scheduling for childcare. But SOGo also has some nice, more advanced features, like the ability to subscribe to remote CalDAV feeds on other servers through the web UI.

I’m pretty happy with SOGo, though I’ll certainly be revisiting ownCloud and Radicale at some point. When I first tried ownCloud, it was immature, but it’s since grown a lot. And when I first tried Radicale, it was using a “strange” ACL model, but that’s been overhauled in 0.8. DAViCal was working, though it wasn’t a pleasure to configure, and I’m sure there are a few other workable servers I passed over.

I highly recommend ownCloud. At the end of 2014, I switched from SOGo to ownCloud, and have not looked back. ownCloud has a better web UI, has a much stronger and more vibrant community, is alive and growing, is much easier to host (e.g. repos for popular GNU/Linux distributions, and a GLAMP stack), and is useful for more than just CalDAV (I’m already using it for file synchronization and CardDAV as well).

Desktop Client: Lightning

Since I’m a Thunderbird/IceDove user, Lightning is the obvious choice for a desktop client. We also use Thunderbird at the office and in my family. Lightning also supports Google Calendar, so just like with degooglifying email, you can switch your frontend and backend in separate steps.

The Evolution calendar is pretty awkward. I tried it when I was using SyncML, but it didn’t last long. There are other options too.

Web Client: SOGo, ownCloud or CalDavZap

I’d prefer a server with a web client, like SOGo or ownCloud, but for a standalone CalDAV web client (e.g. to pair with Radicale or DAViCal), CalDavZap [demo] seems pretty cool.

Mobile Client: SyncEvolution or aCal

Maemo: The reason I spent so much time on SyncML was that there was no CalDAV client for Maemo, but now SyncEvolution supports CalDAV/CardDAV sync!

Android: Use Davdroid. It syncs CalDAV and CardDAV to native AOSP storage.

aCal is an Android CalDAV client, and a replacement for the proprietary Google calendar application. It works really well, but the UI feels awkward and non-native. [Update: There’s also CalDAV-Sync, which I’d skipped over because it’s proprietary, but maiki pointed out that the developer at least intends to open source it eventually. I’m not sure whether the Android Calendar app is free software or one of the proprietary “Google experience” apps.] Both sync to local storage for offline support.

Conclusion

It took me a long time to figure this out, especially since I was focused on SyncML at first, but I’ve finally fully replaced Google Calendar with CalDAV solutions. SOGo, ownCloud and Radicale are all great CalDAV servers. SOGo and ownCloud have built-in web clients, but there’s also CalDavZap as a standalone web client. Lightning is the obvious cross-platform desktop CalDAV client of choice, and SyncEvolution and Davdroid (or aCal) provide mobile clients for Maemo and Android.

The good news is there are plenty of options. As a bonus, most of these come with CardDAV support (which will be the focus of a future post), and ownCloud handles photos, music, and other files as well, so you may get more than just a calendar. Or, if it’s just a calendar you want, light-weight solutions like Radicale and CalDavZap give you just that.

I’m just thrilled to have finally figured this out.


HOWTO: CalDAV/CardDAV Sync from N900 to SOGo using SyncEvolution

When I moved to Maemo in 2010, I was using Google Calendar. I set up a sync via Exchange and eventually Erminig, which allowed me to sync my wife’s Google calendar too. But, when I started degooglifying and moving to free network services, I left Google Calendar for Funambol, using SyncEvolution as a Maemo SyncML client.

This was far from ideal: we lost shared calendars, there was no web UI, and desktop SyncML options were lacking. I quickly realized that CalDAV would be the better long-term option. I chose SOGo as my CalDAV server, but I couldn’t find a CalDAV client for the N900. (I tried the Funambol SOGo Connector, but just couldn’t figure it out.)

I’d just about given up on a comprehensive sync solution in Maemo… until I hit the jackpot a few days ago and stumbled upon a post by Thomas Tanghus on a CalDAV/CardDAV sync from the N900 to ownCloud using SyncEvolution.

It looks like SyncEvolution gained CalDAV/CardDAV support in version 1.2 — the N900 has a CalDAV client!

CalDAV/CardDAV Sync using SyncEvolution

Thomas’ instructions were for ownCloud, but they work for any CalDAV/CardDAV server. I only ran into two issues, I think because I’d been using SyncEvolution pre-1.2. The steps included here are 90% from Thomas, with those two additions.

Reinstallation

First, I ran into the same problem as Wolfgang: the SyncEvolution WebDAV template wasn’t there when I tried to run Thomas’ first step. Wolfgang’s solution worked for me as well: just uninstall and reinstall SyncEvolution.

$ root
# apt-get remove syncevolution syncevolution-frontend
# apt-get install syncevolution syncevolution-frontend

I suspect you’ll need to do this if you initially installed SyncEvolution before it included WebDAV support.

Configuration

After reinstalling, I was successfully able to follow Thomas’ instructions (ignore the “backend failed” notices in the first command):

syncevolution --configure --template webdav username=YOURUSERNAME password=YOURPASSWORD target-config@sogo
syncevolution --configure database=CALDAVURL backend=caldav target-config@sogo calendar
syncevolution --configure database=CARDDAVURL backend=carddav target-config@sogo contacts

The CalDAV URL for your default SOGo calendar is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Calendar/personal and the CardDAV URL for your default SOGo addressbook is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Contacts/personal. You can right-click on any additional calendars in SOGo and select Properties > Links to find the CalDAV link for that particular calendar.
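For example, with a hypothetical SOGo install at sogo.example.com and the username jdoe, the default URLs would look like:
http://sogo.example.com/dav/jdoe/Calendar/personal # CalDAV
http://sogo.example.com/dav/jdoe/Contacts/personal # CardDAV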

I ran into another issue with the next step in Thomas’ instructions. The above commands created new configuration files in /home/user/.config/syncevolution/sogo/, but the following commands operate on /home/user/.config/syncevolution/default/, in which I already had existing, older SyncEvolution configuration files. SyncEvolution complained about my pre-existing configuration, probably because I’d installed a much earlier version of SyncEvolution, and it said that I’d need to “migrate” with the following command:

syncevolution --migrate '@default'

Again, I suspect you’ll need to run this if you’d installed SyncEvolution pre-1.2. After this, I was able to continue with Thomas’ instructions.

In the following command, the username/password should stay blank:

syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogo username= password= sogo

Then, configure the databases, backend and sync mode for calendar and contacts:

syncevolution --configure sync=two-way backend=calendar database=N900 sogo calendar
syncevolution --configure sync=two-way backend=contacts database=file:///home/user/.osso-abook/db sogo contacts

I’m running SSL on my server, so I had to add this step to get past an SSL error:
syncevolution --configure SSLVerifyServer=0 target-config@sogo

(I bet there’s a way to configure it to properly verify the SSL certificate… but I’ll save that for another day.)
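For the record, my untested guess is that it involves pointing SyncEvolution at the server’s CA certificate rather than disabling verification entirely, something along these lines (the certificate path here is just an example):
syncevolution --configure SSLVerifyServer=1 SSLVerifyHost=1 SSLServerCertificates=/etc/ssl/certs/ca-certificates.crt target-config@sogo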

Testing

To test the configuration:

syncevolution --print-items target-config@sogo calendar
syncevolution --print-items target-config@sogo contacts

If that shows the data you expect to be there, then go ahead and run your first sync.

First Sync

SyncEvolution has several sync modes. The above commands configured the default mode to be ‘two-way’, but if you have initial data on both your client and server, you’ll want to run a ‘slow’ sync first.

syncevolution --sync slow sogo

My initial slow sync took almost an hour for ~2540 calendar events and ~160 contacts.

(If you want to overwrite your client with data from the server, or vice versa, look up ‘refresh-from-client’ or ‘refresh-from-server’ instead of ‘slow’.)

Scheduling

After that initial sync, you can run a normal sync at any time:

syncevolution sogo

While the command line is great for configuration and testing, you don’t want to open a terminal every time you want to sync your calendar. You could schedule the sync command via fcrontab, but the Maemo syncevolution-frontend GUI has a daily scheduler.
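If you do go the fcrontab route, an entry along these lines (illustrative only, not my actual configuration) would run the sync every morning at 7:00:
# edit the user's schedule with: fcrontab -e
0 7 * * * syncevolution sogo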

[Screenshot: Maemo SyncEvolution frontend]

UPDATE: Syncing Multiple Calendars

I’ve adapted the above commands to create new target-configs for two other calendars I want to sync — my wife’s and my childcare calendar for my son. There may be a more elegant way to reuse the same target-config, but this works.

First, in the Calendar application, under Settings > Calendars, I created one for my wife’s calendar called “Heather” and one for my son’s calendar called “Noah.”

You can view all the available databases with the following command:
syncevolution --print-databases

You should see your new calendar listed here. It can be used by name, so long as that name is unique (and there aren’t any special characters to escape).

Then, adapting the above commands:
##### Heather
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogoheather
syncevolution --configure database=HEATHERCALDAVURL backend=caldav target-config@sogoheather calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogoheather username= password= heather@heather
# A one-way sync is fine here, because I just want to view my wife's calendar
syncevolution --configure sync=one-way-from-remote backend=calendar database=Heather heather@heather calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogoheather
syncevolution --print-items target-config@sogoheather calendar
# no need for a first slow sync with one-way mode set
syncevolution heather
##### Noah
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogonoah
syncevolution --configure database=NOAHCALDAVURL backend=caldav target-config@sogonoah calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogonoah username= password= noah@noah
syncevolution --configure sync=two-way backend=calendar database=Noah noah@noah calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogonoah
syncevolution --print-items target-config@sogonoah calendar
# refresh-from-remote is faster than slow, and I know the local calendar is empty
syncevolution --sync refresh-from-remote noah

YMMV and you may want a different configuration for your additional calendars, but this should give you some examples to work from. The key difference in these commands, besides the straight replacements, is adding a unique source name to all of the --configure commands from SyncEvolution_Client onwards (except the SSL fix for the target-config), so that the client config ends up distinct from your primary calendar above.

Lastly, using the syncevolution-frontend, I scheduled daily automatic syncs for these two calendars as well, at different times.

Conclusion

I’m not sure if there’s a more elegant/concise configuration. I’m curious if there’s some way to combine the ‘target-config’ and ‘sogo’ steps… but Thomas spent over 12 hours on this and it works, so I’m not going to mess with it. I’m just thrilled that I’ve got this up and running.

After more than a decade in proprietary software slavery, and nearly two years of wandering in the calendar/contacts desert, I’ve finally reached the promised land of seamless and libre mobile, web and desktop calendar/contact sync. [Edit: Almost: The Maemo calendar application is proprietary…] Thank you, Thomas!


On Revoking Ubuntu’s Root Privileges

I’ve always had mixed feelings about Canonical, the company behind Ubuntu GNU/Linux. While they’ve made great contributions to free software, they’ve also been very inconsistent in their commitment to software freedom. Mark Shuttleworth’s response to the privacy concerns in Ubuntu 12.10 has fundamentally shattered my trust.

An Uneasy History

From restricted drivers to Launchpad to non-free documentation licences, there have always been concerns about Canonical’s commitment to free software. By 2010, the issues were becoming more serious. Ubuntu used to clearly warn users about restricted drivers, but in the Ubuntu Software Centre, proprietary software is no longer merely tolerated; it’s celebrated and actively promoted. The average user doesn’t interact with Launchpad, but with Ubuntu One, Canonical’s proprietary service, users must delete, disable or ignore all of the places where it’s built into the Ubuntu experience. The concerns were starting to affect my everyday use.

But, I didn’t leave. I uninstalled the Ubuntu One packages, and ignored the Software Centre. Though, I did start exploring my options, with a Debian dual-boot and Trisquel in a virtual machine. However, there are many things that I do like about Ubuntu. My Ubuntu install is still 99% free software. Despite the controversy over the design process and community engagement, there are many things I like about Unity — the current obsession of Canonical’s founder, Mark Shuttleworth. I appreciate the outcome of his previous obsession as well — Ubuntu’s release cycle works really well. And, maybe there’s some sentiment — I’ve been running the same Ubuntu GNU/Linux install, across three different computers, since I first left Windows in 2007.

In 2010, my relationship with Ubuntu became uneasy, but it didn’t end. I’m not sure I can say the same for 2013.

The Amazon Dash Debacle

The EFF, RMS and this tongue-in-cheek bug report provide a decent summary of the issue: Ubuntu 12.10 raises serious privacy concerns by reporting searches in the Unity Dash — which have traditionally been local searches — to Amazon, relayed through Canonical.

That Ubuntu screwed up is obvious — at the very least, by enabling this by default. But it’s more than the mistake; it’s the response. In defending the decision, Mark Shuttleworth writes:

We are not telling Amazon what you are searching for. Your anonymity is preserved because we handle the query on your behalf. Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.

This doesn’t build my trust; this shatters it. I did not switch to a free software operating system to have the overlords flaunt their control over my computer. Canonical has done many annoying and proprietary things in the past, but “Erm, we have root” is antithetical to the very notion of software freedom. Ubuntu does not have root access on my machine, nor does Canonical have access to my data. Yes, I must trust the Ubuntu project every time I run updates on my system, but this is a relationship and responsibility to be handled delicately, transparently, respectfully — not a position of power to be flaunted. I trust Ubuntu to maintain the software on my computer. That I trust Ubuntu to provide my system with security updates and bug fixes does not in any way give them licence to do other things, like relay my Dash searches to a third party through a proprietary network service.

To make matters worse, Mark Shuttleworth recently referred to those “who rant about proprietary software” as “insecure McCarthyists.” In response to a question about “decisions that have been less than popular with the Free-software only crowd,” Shuttleworth writes:

Well, I feel the same way about this as I do about McCarthyism. The people who rant about proprietary software are basically insecure about their own beliefs, and it’s that fear that makes them so nastily critical. […]

If you think you’ll convince people to see things your way by ranting and being a dick, well, then you have much more to learn than I can possibly be bothered to spend time teaching.

Aside from the pot-kettle-black nature of his tone, this does not build my trust in Canonical.

These responses strike at the very heart of my decision to use GNU/Linux — software freedom. Canonical has never consistently cared about software freedom, but their offences and missteps have come closer and closer to my everyday computing. Now, a serious violation of privacy is brushed aside dismissively, on the grounds that I should just trust Ubuntu and Canonical because “erm, we have root,” and raising concerns about proprietary software is dismissed as akin to “McCarthyism.”

No, Mr. Shuttleworth, you don’t have root. The fact that you think you do makes me want to move far away from Ubuntu.

After Ubuntu: An Exit Strategy

I would rather not leave Ubuntu. I don’t take the decision lightly. But developments over the past few years have made me very uneasy, and Shuttleworth’s attitude has shattered any trust I ever had in Canonical. Even if Ubuntu fixes this particular problem, I’m not sure what can be done to rebuild trust.

At the very least, I’m preparing an exit strategy:

  1. I’m going to install GNOME 3 in Ubuntu (and maybe LXDE). I like many things about Unity, but adjusting to a different desktop environment will make leaving Ubuntu easier.
  2. Then, I’ll re-evaluate other GNU/Linux distributions. I really like Debian GNU/Linux — it’s just the release cycle that gets me for a primary machine, but I’ve heard good things about Debian testing for everyday use. I’ll also take another look at Trisquel.
  3. I may give Ubuntu 13.04 a chance. I don’t look forward to migrating to another distribution, and the Ubuntu GNOME Remix might be a compromise. Also, it’s not just me — my wife, father, and some machines at the office all run Ubuntu, as well as my living room and recording studio machine. I’m just not sure if I can trust Ubuntu anymore. So, seeing as it may take me a few months to try out other desktop environments and distributions, I may wait to see what changes in Ubuntu 13.04, and re-evaluate middle-ground options like the Ubuntu GNOME Remix, though I’m wary of just “fixing” the problem for myself.

I’ve been patient through many Canonical missteps, and I’ve defended the Ubuntu project over the years. But the “erm, we have root” response shatters my trust in any Shuttleworth-run endeavour. It’s antithetical to the reason I switched to GNU/Linux — software freedom — and I’ll switch again if that’s what it takes.


Degooglifying (Part III): Web Search

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email, feed reader.

Of all Google services, you’d think the hardest to replace would be search. Yet, although search is critical for navigating the web, the switching costs are low — no data portability issues, easy to use more than one search engine, etc. Unfortunately, there isn’t a straightforward libre web search solution ready yet, but switching away from Google to something that’s at least more privacy-friendly is easy to do now.

Quick Alternative: DuckDuckGo

In one sense, degooglifying search is easy: use DuckDuckGo. DuckDuckGo has a strong no-tracking approach to privacy. The !bang syntax is awesome (hello !wikipedia), the search results are decent (though I still often !g for more technical, targeted or convoluted searches), it doesn’t have any search-plus-your-world nonsense or whatever walled garden stuff Google has been experimenting with lately, and it’s pretty solid on the privacy side. After just a few days, DuckDuckGo replaced Google as my default search engine, and my wife has since switched over as well.

The switch from Google Search to DuckDuckGo is incredibly easy and well worth it. If you’re still using Google Search, give DuckDuckGo a try — you’ve got nothing to lose.

But… DuckDuckGo isn’t a final destination. Remember: the point of this exercise isn’t for me to “leave Google,” but to leave Google’s proprietary, centralized, walled gardens for free and autonomous alternatives. DuckDuckGo is a step towards autonomy, as web search sans tracking, but it is still centralized and proprietary.

Web Search Freedom

A libre search solution calls for a much bigger change — from proprietary to free, from centralized to distributed, from a giant database to a peer-to-peer network — not just a change in search engines, but a revolution in web search.

YaCy

Last summer, I ran a search engine out of my living room for a few months: YaCy — a cross-platform, free software, decentralized, peer-to-peer search engine. Rather than relying on a single centralized search provider, YaCy users can install the software on their own computers and connect to a network of other YaCy users to perform web searches. It’s a libre, non-tracking, censorship-resistant web search network. The problem was that it wasn’t stable or mature enough last summer to power my daily web searches. I intend to install it again soon, because as a peer-to-peer effort it needs users and usage in order to improve, but an intermediate step like DuckDuckGo is necessary in the meantime.

Although YaCy is designed to be installed on your own computer, there is a public web search portal available as a demo.

Seeks

Seeks is another interesting project that takes a different approach to web search freedom. Seeks is “an open, decentralized platform for collaborative search, filtering and content curation.” As far as I understand, Seeks doesn’t replace existing search engines, but it adds a distributed network layer on top of them, giving users more control over search queries and results. That is, Seeks is a P2P collaborative filter for web search rather than a P2P indexer like YaCy. Rather than replacing web indexing, Seeks is focused on the privacy, control, and trust surrounding search queries and results, even if it sits on top of proprietary search engines.

Seeks also has a public web search portal (and DuckDuckGo supports !seeks). As you can tell, its results are much better than YaCy’s, but Seeks is tackling a smaller problem and still relying on existing search engines to index the web.

Conclusion

DuckDuckGo, though proprietary and centralized, provides some major privacy advantages over Google and is ready to be used today — especially with Google just a !g away.

But web search freedom requires a revolution like that envisioned by YaCy or Seeks. Seeks seems like more of a practical, incremental and realistic solution, but it still depends on proprietary search. YaCy is more of a complete solution, but it’s not clear whether its vision is technically feasible.

I intend to experiment with both of these projects — p2p services need users to improve — and continue to watch this space for new developments.


Video: Accelerate by Alanna J Brown

A few weeks ago, I stood in with one of my absolute favourite Toronto artists, Alanna J Brown, in a music video for her song, Accelerate, the first single off her upcoming album. The video was conveniently shot in the same building in which I work… command line by day, mask-wearing bassist by night!

Accelerate – Alanna J Brown from Alanna J Brown on Vimeo.

I’m playing with Alanna next week, on the U of T campus.


Degooglifying (Part II): Feed Reader

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email.

Next to email, replacing Google Reader as my feed reader was relatively easy, though I’ve chosen to use the move as an opportunity to clean out my feed subscriptions, rather than doing a straight export/import. I’ve replaced Google Reader with two free software feed readers: Liferea (desktop) and Tiny Tiny RSS (web).

A reading list can be very personal, and it can also be very misleading out of context. For example, my reading list suggests all sorts of things about my religious and political views, about the communities to which I may be connected, etc. Though, it would take some analysis to try and figure out why I subscribe to any particular feed. Is the author’s view one I espouse and whole-heartedly hold as my own? One I find interesting, challenging, or thought-provoking? Or one I utterly disagree with yet want to learn more about?

There is something private about a complete reading list, much like the books you might check out from the library or the videos you might rent from a store. As we get more of this content through the internet, it’s easy for these lists (and even more behavioural data about how we interact with them) to be compiled in large, centralized, proprietary databases, alongside all sorts of other personal information that would not be available to a traditional Blockbuster or public library. Besides the software freedom issues, this is another revealing personal dataset that I can claim more control over by exercising software freedom, rather than dumping it into a big centralized, proprietary database. Both software freedom and privacy issues are at play here.

Desktop Client: Liferea

Liferea is a desktop feed reader for GNU/Linux. Google Reader was my first feed reader, so a desktop feed reader was a bit of an adjustment, but there are a few things I really like about it:

  • Native application: It integrates well with my desktop, with something like Ubuntu’s Messaging Menu, and it’s a client that feels somewhat familiar in GNOME.
  • Control over update frequency: One of the things that bugged me about Google Reader is it constantly checks for new content, whether or not you want it to. Sometimes, I don’t want to see anything new until tomorrow. It’s nice to be able to click update, read, and then let it be until I choose to update again. (Though, the downside is missing material if you don’t update often enough.)
  • Integration with Google Reader / Tiny Tiny RSS: This is a killer feature. You can use Liferea to read feeds through the Google Reader API, and recent versions have added support for a tt-rss backend as well. This helped with my transition because I could use Liferea as a front-end for Google Reader before I was prepared to migrate my feeds, to test it out, to ease the transition, etc. And, I will be able to use Liferea and tt-rss together to have both desktop and web-based clients.
  • Embedded Web Browser: This is also a killer feature. Websites that don’t have full-text feeds and only offer a content snippet are annoying in Google Reader, because you have to leave Reader to see the full content. But, in Liferea, you can tell it to automatically load content for a feed using the embedded web browser instead of just viewing the snippet, or hit enter on any feed entry to load the URL using the embedded browser. It even has basic tabbed browsing support, so you don’t have to flip back and forth between your web browser and your feed reader. This makes reading content from non-full-text feeds easy without leaving Liferea.
  • Integrated Comments: Liferea can detect comment feeds on many blogs, and it shows a handful of comments underneath entries. Combine this with a quick enter key to visit the web page with the embedded browser, and you no longer have to leave the feed reader to participate in the comments. This is a nice step up from the usual isolation of a feed reader from comment threads.
  • Authentication support for protected feeds: This is a useful feature for subscribing to protected content, such as an updates feed on an internal wiki.

I tested Liferea as a Google Reader front end, then migrated subscriptions group by group (giving me a chance to re-organize, though I could have just used an OPML export/import), and once I upgrade to Liferea 1.8, I’ll connect it to tt-rss.

Other Desktop Clients: RSSOwl is a free software, cross-platform (Windows, Mac OS X, GNU/Linux) feed reader, which also has Google Reader integration. I have only tried this briefly, so that I could recommend it to Windows users.

Web Client: Tiny Tiny RSS

Tiny Tiny RSS is a web-based feed reader, similar to Google Reader, but free software that you can run on your own web server. There are some feeds I read all the time, and others I’ll skim or catch up on when I have a chance. For the must-read feeds, it makes a huge difference to be able to read them from my mobile computer. With Google Reader, I used grr, and there is a mobile web interface. I migrated my must-read feeds to tt-rss instead of Liferea so that I’d have easy access to them while away from my laptop, while still having the ability to use Liferea when on my laptop with its tt-rss integration. I’m moving more and more feeds into tt-rss, though I plan to leave some less frequently updated, less important feeds or feeds that are difficult to read from my mobile in Liferea only.

Some cool features:

  • Publish articles to shared feed: Google Reader had a shared articles RSS feed, and I’d piped that into blaise.ca. tt-rss has a similar RSS feed, which I’ve also been able to include on my website.
  • Mobile web interface: tt-rss has a mobile web interface for webkit browsers powered by iUI. With Macuco on my N900 or the Android web browser, it works quite well — though, only for full-text feeds.
  • Filters: With tt-rss, you can create filters on feeds. So, for example, I am automatically publishing articles from the Techdirt feed that I’ve written, or I can auto-delete posts for a particular series or author that I’m not interested in to custom tailor a feed to my interests. It’s very useful for automating certain actions or reducing noise on a high-traffic feed.
  • Custom CSS: I suppose you could customize Google Reader’s styles with a GreaseMonkey script or something, but tt-rss offers custom CSS overrides and multiple themes out of the box, which is great for setting some more readable default colours.
  • API: tt-rss has an API, which allows for Liferea integration, an Android client, etc.
  • Authentication support for protected feeds: Like Liferea, tt-rss provides support for feeds requiring authentication.

As with Liferea, tt-rss gives me control over how frequently updates run, since I schedule the update job. But that control also comes without the downside of missing content if I’m away from my feed reader for a while; unlike a desktop client that needs to be open to retrieve new content, tt-rss does so in the background from the server, so it can still track new entries while I’m away. It has the benefits of Google Reader’s persistent background updates, while still giving me control over frequency and scheduling. I have the update job set to run a few specific times through the day, and tt-rss gives you the option to set an even longer update interval for any given feed.
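For the curious, that kind of schedule boils down to a cron entry along these lines, calling tt-rss’s update.php to fetch new articles a few times a day (the install path and the times here are illustrative, not my exact configuration):
# fetch new articles at 7:00, 12:00 and 18:00
0 7,12,18 * * * php /var/www/tt-rss/update.php --feeds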

Though I was initially migrating from Google Reader to Liferea, Tiny Tiny RSS is quickly becoming my primary feed reader, with Liferea becoming my primary desktop client for tt-rss and the home for less frequent, less important or non-full-text feeds.

Other Web Clients: NewsBlur is another web-based, free software feed reader, which is based on a more modern web stack and seems to have some neat features. I have yet to try it, and I’m not sure of the state of its mobile or API/desktop integration, which are two things I really like in tt-rss. It’s worth taking a look at though for sure. NewsBlur.com has a hosted service, if you aren’t able to run your own web server or don’t have a friend who’s running one.

Conclusion

My migration away from Google Reader is essentially complete. I have fewer than a dozen feeds remaining there, mostly old or broken ones. I no longer log into Google Reader to read anything, though I’ve got one more round of cleaning to do to empty my account. I’m currently split between Liferea and tt-rss, but with Liferea 1.8, I’ll be able to integrate the two. I also have other libre options to explore with NewsBlur and RSSOwl.

There is nothing that I miss about Google Reader, and if anything, with an embedded browser, native desktop options, integrated comments, control over update scheduling, feed filters, and authentication support for protected feeds, I have a lot of useful features now that I didn’t have with Google’s proprietary service — never mind more software freedom and less surveillance.


Can Facebook Really Bring About A More Peer-to-Peer, Bottom-Up World?

This post originally appeared on Techdirt

Mark Zuckerberg’s letter to shareholders included in Facebook’s IPO filing contains a pretty bold vision for Facebook to not just connect people and enable them to share, but to fundamentally restructure the way that the world works:

By helping people form these connections, we hope to rewire the way people spread and consume information. We think the world’s information infrastructure should resemble the social graph — a network built from the bottom up or peer-to-peer, rather than the monolithic, top-down structure that has existed to date. We also believe that giving people control over what they share is a fundamental principle of this rewiring.

We have already helped more than 800 million people map out more than 100 billion connections so far, and our goal is to help this rewiring accelerate. [emphasis added]

That sounds pretty lofty, but if you recognize that Facebook provides a social networking service that hundreds of millions of people use — but forget for a moment that it’s Facebook — it’s quite a bold “social mission.” And there are many examples of how the service has been used as a key tool in effecting change on everything from opposition to the Canadian DMCA to the Arab Spring. There’s no doubt that the service makes it easier for people to organize in a more bottom-up way.

But, once you remember that it’s Facebook we’re talking about, the vision sounds more problematic. Could Facebook ever truly bring about a peer-to-peer, bottom-up network? The notion seems to be an inherent contradiction to Facebook’s architecture — as a centralized, proprietary, walled garden social networking service. Facebook may enable a more bottom-up structure, but it’s a bit disingenuous for Zuckerberg to decry a monolithic, top-down structure when Facebook inserts itself as the new intermediary and gatekeeper. As a centralized, proprietary, walled garden service, Facebook is a single point for attacks, control, and surveillance, never mind controversial policies or privacy concerns. Facebook may enable a more bottom-up and peer-to-peer network compared to many things that came before, but there is something fundamentally at odds with a truly distributed solution at the core of its architecture and its DNA.

To realize the full potential of bottom-up, peer-to-peer social networking infrastructure, we need autonomous, distributed, and free network services — the sort of vision that StatusNet/Identi.ca or Diaspora have tried to bring about. Rewiring the world to create a more bottom-up, peer-to-peer network is a bold vision for Zuckerberg to put forth — and one that Facebook has advanced in many ways — yet it’s fundamentally at odds with the reality of Facebook as a centralized and proprietary walled garden.

Read the comments on Techdirt.


The Songwriters Association of Canada Wants To Embrace File Sharing, But Does It Have the Right Approach?

This post originally appeared on Techdirt.

In 2007, the Songwriters Association of Canada gained some international headlines with a proposal to legalize non-commercial peer-to-peer file sharing through an ISP levy. This sort of proposal wasn’t new, but it had not been so prominently put forth by an artist organization before. There were serious problems with the proposal, but it stimulated a healthy debate and it started from many correct premises — that file sharing should be embraced, that digital locks and lawsuits were not a way forward, etc. But it was a non-voluntary, “you’re a criminal” tax that could open the floodgates for other industries to demand similar levies.

I was a member of the Songwriters Association of Canada from 2007-2011, and I had the opportunity to express my concerns about the proposal to many people involved. Last year, I attended a session with an update on the proposal, and was surprised how much it had changed. The proposal had dropped the legislative angle in favor of a business to business approach, with an actual opt-out option for both creators and customers of participating ISPs. Unlike groups behind other licensing proposals, the SAC has actually been responsive to many concerns, and unlike other artist groups, the SAC takes a decidedly positive view on sharing music and the opportunities technology provides to creators. This attitude comes through in the proposal:

Rather than a legislative approach to the monetization of music file-sharing as we originally envisioned, the S.A.C. is now focused on a “business to business” model that requires no new legislation be enacted in Canada.

Our basic belief however remains the same: Music file-sharing is a vibrant, open, global distribution system for music of all kinds, and presents a tremendous opportunity to both creators and rights-holders. […]

People have always shared music and always will. The music we share defines who we are, and who our friends and peers are. The importance of music in the fabric of our own culture, as well as those around the world, is inextricably bound to the experience of sharing. [emphasis changed]

As the copyright debate heats up again in Canada in light of SOPA and new pressures on pending legislation, this positive attitude towards peer-to-peer file sharing was expressed again in a recent TorrentFreak interview with the SAC VP, Jean-Robert Bisaillon:

We think the practice [of file-sharing] is great and unstoppable. This is why we want to establish a regime that allows everyone to keep on doing it without stigmatizing the public and, instead, find a way for artists and rights holders to be fairly compensated for the music files that are being shared. […]

Other positive aspects include being able to find music that is not available in the commercial realm offer, finding a higher quality of digital files, being able to afford music even if you are poor and being able to discover new artists or recommend them to friends. […]

Music is much better off with the Web. The internet network allows for musical discovery despite distance and time of the day. It has sparked collaborations between musicians unimaginable before. It has helped artists to book international tours without expensive long-distances charges and postal delays we knew before. [emphasis added]

However, significant problems remain with the proposal. For example, the original criticism still stands as to how this would scale for other industries — what about book publishers, newspapers, movie studios, video game manufacturers and other industries that are also crying foul about “piracy”? The SAC dismisses other cultural industries pretty quickly, as if only the music industry is concerned about unauthorized copying. And, just like private copying levies have suffered from scope creep, as people no longer buy blank audio cassettes or CDs, or short-sightedness, as technology changes rapidly, it’s not clear how the SAC model would adapt to growing wireless and mobile computing or more distributed file sharing. Many more questions remain: Would small, independent artists, who are not charting through traditional means, get fair treatment? Is it wise to largely rely on a single, proprietary vendor, Big Champagne, for tracking all distribution? Would consumers be paying multiple times for music? What does it mean to “self-declare not to music file-share” in order to opt out?

But the central problem with the proposal is the SAC’s copyright crutch. Jean-Robert Bisaillon says things like,

The Internet has dramatically increased the private non-commercial sharing of music, which we support. All that is missing is a means to compensate music creators for this massive use of their work. [emphasis added]

And the proposal says things like,

Once a fair and reasonable monetization system is in place, all stakeholders including consumers and Internet service providers will benefit substantially. [emphasis added]

The SAC seems obsessed with a “monetization system,” when the truth is there is no one model, no magic bullet. Rather, the sky is rising and the path to success involves all sorts of different models and creative approaches, most of which don’t depend on copyright or worrying about getting paid for every use. Even a voluntary license plan is still a bad idea. The means to compensate music creators isn’t missing; it’s just increasingly found outside of copyright.

Still, it’s important for the SAC’s voice to be heard as the copyright debate heats up again in Canada. As a creator group offering a positive take on peer-to-peer file sharing, and denouncing an “adversarial relationship” between creators and fans, they offer an important counterpoint to the SOPA-style provisions being pushed by Canadian record industry groups. I would take the SAC’s constructive and responsive approach over record industry astroturfing and fear mongering any day.

Read the comments on Techdirt.
