Blog - Unity Behind Diversity

Searching for beauty in the dissonance


HOWTO: Redmine 3.0 with Debian Wheezy, Nginx, Thin

I spent a few hours troubleshooting a problem with Thin when I upgraded from Redmine 2.5 to Redmine 3.0 on a Debian Wheezy server. I found a solution that’s worked for me. I’m not confident enough with Ruby and this setup to make a HowTo on the Redmine wiki, but I found next to nothing on this specific problem when searching the web, so I figure it’s important to post this in case it helps anyone else.

I’m running Redmine on Debian Wheezy with Nginx and Thin. I’m also running Redmine on another Wheezy server with Apache (and mod_passenger, I think). The latter upgrade to Redmine 3.0 went fine, but when I ran the same steps on the Thin/Nginx server, I got a 502 Bad Gateway error from nginx and found this in the Thin logs.

!! Unexpected error while processing request: uninitialized constant Rack::MethodOverride::REQUEST_METHOD

Yet, when I ran Redmine with webrick (per Redmine’s installation instructions), it worked fine. Since it worked fine with webrick and on my other server, it seemed like the problem was at the Thin layer.

This Stack Overflow issue was the closest I could find, though it involved a different Rack application. The problem there was a mismatch between the installed Rack version and the one required by the application.

I couldn’t find Redmine-specific examples (hence this post), but this one Redmine guide did say “Rack 1.0.1. Version 1.1 is not supported with Rails 2.3.5”.

Finally, the Stack Overflow issue linked to this issue in the Passenger tracker which pointed me towards the answer:

Your system has two Rack versions installed. One is version 1.5.0, installed by APT, and is located in /usr/lib/ruby/vendor_ruby. The other one is version 1.6.0, installed by RubyGems, and is located in /var/lib/gems/2.1.0/gems/rack-1.6.0.

Before Passenger loads your app, Passenger calls require “rack”. Because /usr/lib/ruby/vendor_ruby is in Ruby’s $LOAD_PATH, Passenger loads the Rack 1.5.0 library installed by APT.

However Sinatra requires Rack 1.6.0 or later…

This was my problem. When I installed Redmine 2.5, I ran `apt-get install thin`. It pulled in ruby-rack 1.4.1 as a dependency. This “conflict” wasn’t a problem in Redmine 2.5, which has “rack (~> 1.4.5)” in Gemfile.lock — the versions are close enough. However, Redmine 3.0 has “rack (~> 1.6)” in Gemfile.lock… hence the error I was seeing, as Rack 1.4.1 installed via apt was probably being loaded in place of the 1.6.1 Gem.
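
A quick way to check whether you’re in the same dual-Rack situation is to compare what apt and RubyGems have each installed:

dpkg -l | grep ruby-rack # Rack installed via apt
gem list rack # Rack versions installed via RubyGems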

I tried to `apt-get remove ruby-rack`, but it was going to remove thin as well. (And I checked the Jessie repos, but its ruby-rack is still only 1.5.2.) I identified two solutions:

  1. Uninstall ruby-rack and thin via apt, and reinstall thin separately (this worked)
  2. Create a dummy .deb using equivs to install thin via apt without really installing ruby-rack (I didn’t bother trying this, since the first solution worked; an untested sketch follows below)
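
For the record, the equivs route would look roughly like this. This is an untested sketch: the package name and version are invented for illustration, and whether thin’s versioned dependency would accept a plain Provides is exactly what I’d want to test.

apt-get install equivs
equivs-control ruby-rack-dummy.ctl
# edit ruby-rack-dummy.ctl so it contains at least:
#   Package: ruby-rack-dummy
#   Version: 1.6.0
#   Provides: ruby-rack
equivs-build ruby-rack-dummy.ctl
dpkg -i ruby-rack-dummy_1.6.0_all.deb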

To install thin separately, first I removed it and ruby-rack, while marking a couple of other dependencies as manually installed and keeping the /etc/init.d/thin file…

apt-get remove ruby-rack thin
apt-get install ruby-eventmachine ruby-daemons # not sure if this was necessary or advisable, just a guess

Then, following the thin installation instructions, I was able to install the gem:
apt-get install ruby-dev build-essential
gem install thin
# Update the path in /etc/init.d/thin from /usr/bin/thin (apt) to /usr/local/bin/thin (gem)
perl -pi -w -e 's/\/usr\/bin\/thin/\/usr\/local\/bin\/thin/g' /etc/init.d/thin

I was able to start thin again (`service thin start`), but I was getting a new error for which I found the solution here: add thin to your Gemfile.

So, somewhat reluctantly, I opened up Gemfile in the Redmine root directory and in between a couple other gem lines I added the line:
gem "thin"
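
If you edit the Gemfile, you’ll also want to re-run Bundler so that Gemfile.lock picks up the new gem. A sketch, assuming Redmine lives in /var/lib/redmine and the usual install-time flags:

cd /var/lib/redmine
bundle install --without development test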

Then, I restarted thin, and Redmine was working again!

Things I don’t like about this solution or am unsure of:

  • Editing Redmine’s Gemfile sucks, because I’ll lose that change on every update and I’ll have to re-apply it. Since it’s a simple one-liner and updates are every few months, it works for me for now. (See the Gemfile.local note after this list for a possible way around it.)
  • I don’t know yet whether those other apt thin dependencies are required or might cause other conflicts in the future… but since it’s working now, and I’m not familiar with Ruby, I don’t feel like spending more time to experiment and find out.
  • Maybe I should have looked at other options besides thin, like Puma or Passenger? But since I already had Thin working before, I decided to see if I could salvage it rather than exploring alternatives. Maybe Thin isn’t the best option in this circumstance, but it’s working.
  • I’m assuming that /etc/init.d/thin was there and working because it was leftover from the apt thin installation (and because I didn’t `apt-get purge`). That may have been lucky that the init script happens to work (as far as I can tell for now) with the thin gem…
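
One possible way around the Gemfile edit, which I haven’t verified on this setup: recent Redmine Gemfiles load an optional Gemfile.local if it exists, and that file survives updates. If yours does, something like this might work instead (path assumed):

echo 'gem "thin"' >> /var/lib/redmine/Gemfile.local
cd /var/lib/redmine && bundle install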

Hopefully this can help anyone else using Redmine 3.0/Debian/Nginx/Thin who sees that error. I’d be happy to share configuration and fresh installation instructions once I have some confidence that this approach is sane, and in particular once I have a better solution than modifying the Gemfile.


HOWTO: Pair a new device for the old Firefox Sync service in Firefox 30

I got into a public fight with IceWeasel/Firefox 30 and the Mozilla sync service on pump.io last month, and was meaning to publish my “fix”… but it was so hacky, I don’t know which part of it actually worked. But, since it’s somewhat time-sensitive during this sync service transition, I figure it’s better to share this incomplete hack than not to share at all.

The Problem: Can’t Pair New Devices in Firefox/IceWeasel 30 Using the old Firefox Sync Service

I recently switched my ThinkPad X60 from Ubuntu to Debian testing. When I tried to set up IceWeasel 30 with the Mozilla sync service, it started prompting me about creating a Firefox account — something I have absolutely no interest in doing (in fact, I was planning on moving my Firefox sync off Mozilla’s servers to ownCloud).

I discovered that, while previously paired devices would still be able to sync using Mozilla’s old sync service for a limited time, Firefox/IceWeasel 30 no longer supports pairing new devices to the old sync service.

This made me really angry. If I’d set up sync and paired the device before “upgrading” to IceWeasel/Firefox 30, I’d be syncing no problem, but Firefox/IceWeasel 30 refused to allow this. It was an infuriating combination of what felt like an anti-feature, and pressure from Mozilla to sign up for a new sync service that seems worse on the privacy front (e.g. no server-side encryption, and self-hosting is experimental now because you’d also have to self-host the Accounts service…).

The Solution: Tricking IceWeasel/Firefox by editing prefs.js

Technically, this wasn’t a new device. I’d already had my X60 Firefox set up to sync before I switched from Ubuntu to Debian. So, I managed to trick IceWeasel into letting me sync again.

This was pretty reckless (though the stakes were very low — a brand new IceWeasel profile), I’m not sure exactly what worked, and you should use these instructions at your own risk, etc.:

  • I copied the weave folder from inside my old Ubuntu Firefox profile (not sure if that mattered), plus all of the lines in prefs.js for settings that started with “services.sync.*” (this definitely mattered)
  • I tried manually editing the preferences (resetting timestamps to zero, etc.), but what actually ended up working was simpler: when I opened IceWeasel with those lines copy-pasted in from the Firefox profile of my old, no-longer-used Ubuntu install, it gave me the “Pair a New Device” option the first time I accessed the Sync settings!
  • The option would disappear and not come back if I cancelled pairing, but closing IceWeasel, copying those services.sync.* lines into prefs.js again, and pairing the first time the option appeared worked: I successfully paired IceWeasel 30.
  • I could see “tabs from my other computers” now, but my bookmarks clearly weren’t there, so I shut IceWeasel down and changed the values of all the services.sync.*.lastSync and services.sync.*.lastSyncLocal timestamps (and a couple of other similar ones), setting them to 0 from their prior values. Then, I re-opened IceWeasel, ran the sync manually, and my bookmarks started appearing! Since then, everything seems to have been working fine.

I think it was something in copying the services.sync.* settings that allowed the Pair a New Device screen to work the first time I reopened IceWeasel. Then, after pairing, resetting the timestamps to 0 on the services.sync.*.lastSync* settings caused IceWeasel to download everything anew.
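
In shell terms, the whole hack amounts to something like this. A sketch with hypothetical profile paths, to be run with both browsers closed:

# copy the sync prefs from the old (Ubuntu) profile into the new one
grep 'services\.sync\.' /mnt/old/home/user/.mozilla/firefox/abc123.default/prefs.js >> ~/.mozilla/firefox/xyz789.default/prefs.js
# after pairing: zero the lastSync/lastSyncLocal timestamps to force a full re-download
sed -i -E 's/(services\.sync\.[^"]*lastSync[^"]*", ")[0-9.]+(")/\10\2/' ~/.mozilla/firefox/xyz789.default/prefs.js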

YMMV. I’m not sure how much of my success depended on being able to hijack an existing client sync ID from a device that was previously configured but no longer in use (i.e. my former Ubuntu Firefox profile on my X60 that I was replacing with Debian IceWeasel). And these steps are vague and unspecific because I’m not really sure what precisely worked, or what may be unwise for you to try if you don’t know what you’re doing… but feel free to contact me if you want more specifics on my setup and experience, and I may be able to help.

At the very least, this will allow me to continue using the old sync service for now, until I figure out what my options are re: self-hosting, ownCloud, Mozilla’s new Firefox Accounts-based sync service, etc.


SOLUTION: gPodder 2.20.3 on N900: database disk image malformed

After some strange behaviour in gPodder 2.20.3 yesterday on my N900 (not responding to episode actions), I quit gPodder and tried to start it up again, but it would crash during startup every time with an error about “database disk image malformed” from line 316 of dbsqlite.py on the query: “SELECT COUNT(*), state, played FROM episodes GROUP BY state, played”.

First, I opened up the sqlite database directly:
sqlite3 ~/.config/gpodder/database.sqlite

I could run that query and others no problem.

Then I found this guide on repairing a corrupt sqlite database. Following it, I ran the integrity check below, which returned a couple of errors along with the “database disk image malformed” message:
sqlite> pragma integrity_check;

So I followed the instructions from spiceworks, dumped my database to file and reloaded it into a new database:
cd ~/.config/gpodder/
echo .dump | sqlite3 database.sqlite > gpodder.sql # generate dump file
mv database.sqlite database.sqlite.bak # backup original database
sqlite3 -init gpodder.sql database.sqlite # initialize a new database from the dump file
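
To confirm the rebuilt database is actually clean, you can re-run the integrity check; it should now return “ok”:

echo 'pragma integrity_check;' | sqlite3 database.sqlite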

And, voila, gPodder is working again.


Degooglifying (Part IV): Calendar

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email, feed reader, search.

Finding a replacement for Google Calendar has been one of the most difficult steps so far in my degooglification process, but in the end I’ve found a bunch of great, libre alternatives.

Beyond the basic criteria for free network services, I was looking for:

  • desktop, web and mobile clients
  • offline access, especially for mobile
  • multiple calendars
  • access controls for sharing calendars
  • ability to subscribe and share calendars with other servers
  • applicable for business and personal use

First Attempt: SyncML using SyncEvolution and Funambol

I started with SyncML, an open standard for syncing calendar and contact data. SyncEvolution is a great SyncML client, with both GUI and command line tools available for GNOME and Maemo GNU/Linux, and Funambol is an AGPL SyncML server, with an Android client.

I set up Funambol and migrated from Google Calendar in July 2011, using SyncEvolution on my N900 and my laptop, but there were a bunch of problems. It was unstable around the edges, not handling deletes very well, and sometimes choking and failing on certain characters (” maybe?) in event titles. When I tried to switch my parents over on Android, it was a nightmare trying to figure out where the sync was failing, and they eventually moved to Google Calendar instead. SyncEvolution only syncs with Evolution on the desktop; there’s no mature SyncML solution for Lightning. The Funambol free software edition felt like a bit of an afterthought as well, with poor or outdated documentation and a crippled, totally useless “demo” web UI. There was no calendar sharing or access control either. Plus, Funambol is a pretty heavy application, targeted at mobile carriers, not someone who wants to run it from their living room.

SyncML with Funambol and SyncEvolution allowed me to leave Google Calendar behind, but I ended up living off my mobile calendar, using Funambol essentially as a backup service. I had no web client, no shared calendars, and eventually stopped syncing to Evolution on my laptop. Part of the problem was Funambol, but part of the problem was also SyncML, which seems to be a clunky standard, designed for an older paradigm of syncing with offline mobile clients.

I quickly realized that CalDAV was the better open standard.

The Solution: CalDAV

CalDAV is an internet standard for remote access to calendar data, built as an extension of WebDAV. It’s a more modern standard than SyncML — though SyncML does have better support on older mobile devices. (There’s also CardDAV for contacts, but I’ll leave that for a future post.)
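
If you ever want to poke at a CalDAV endpoint by hand, a bare WebDAV PROPFIND is enough to see whether a server is answering. A sketch with a hypothetical URL and credentials; a 207 Multi-Status response means you’re talking to a DAV server:

curl -u USERNAME:PASSWORD -X PROPFIND -H "Depth: 0" https://dav.example.com/calendars/USERNAME/personal/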

Servers: SOGo, ownCloud or Radicale

There are a ton of CalDAV servers.

Here are my favourites so far:

SOGo [demo]
  Pros: works with anything via connectors; well-integrated with Thunderbird/Lightning, with a web UI modelled after Lightning; Ubuntu/Debian repos.
  Cons: UI isn’t super pretty; comes with a webmail client I don’t want; heavy, and took some effort to install (e.g. I made a custom MySQL user auth table, in the absence of an LDAP server).

ownCloud [demo]
  Pros: very alive; support for contacts, photos, music, etc.; Ubuntu/Debian repos.
  Cons: newer (immature when I first tried it in 2011); seemed more of a personal than a business tool, but that may have changed. Update: this has changed. As of 2015, ownCloud is strong, mature and thriving.

Radicale
  Pros: simple, elegant, light-weight.
  Cons: no UI; one for sysadmins.

I tried a few others, but I wouldn’t recommend them:

  • Funambol CalDAV connector: In theory, the best of both worlds with SyncML and CalDAV support, but I couldn’t figure out if there was an updated stable version, how to get it working with Funambol, etc., and it would still carry the Funambol issues and lack a web client and CardDAV support.
  • DAViCal: seemed robust, but onerous to configure and administer, and the web UI is only for administration (no web calendar client). This could work, but it just felt a bit cumbersome to use.
  • Update: lnxwalt mentions PHP Web Calendar, which I’d missed. I tried the online demo, but it looks/feels pretty ~2005: an awkward, not fully-featured UI, a focus on old standards like iCal (rather than true CalDAV?), a CVS wishlist that includes SyncML support and a Java servlet, import/export from Palm as a key feature, etc.

Others I didn’t bother to try:

  • Zimbra: Seemed like heavy-duty Groupware with a bunch of things I didn’t need or want — though could make sense if that’s what you’re looking for.
  • Horde (Kronolith): I did try Horde, but using the old interface a few years back. That UI felt 10+ years old, but it’s since undergone a complete overhaul and I haven’t looked at it since. It’s also a groupware suite, which may be a plus or a minus. However, I don’t think it uses real CalDAV.
  • Bedework: Java, seems heavy, without any obvious benefits or easy packaging
  • Apple Calendar and Contacts Server: while Apache licensed, it really doesn’t seem to be designed to enable other people to run the software — I didn’t get very far looking into this
  • Update: Jean Baptiste Favre has a great tutorial on implementing SabreDAV, a PHP library which implements WebDAV and its CalDAV and CardDAV extensions, if you want to build your own solution.

I’m using SOGo. Though, that’s partially because it was the most comprehensive solution I had working when my wife went back to work after maternity leave and we needed sharable calendars again to coordinate childcare scheduling. But SOGo also has some nice, more advanced features, like the ability to subscribe to remote CalDAV feeds on other servers through the web UI.

I’m pretty happy with SOGo, though I’ll certainly be revisiting ownCloud and Radicale at some point. When I first tried ownCloud, it was immature, but it’s since grown a lot. And when I first tried Radicale, it was using a “strange” ACL model, but that’s been overhauled in 0.8. DAViCal was working, though it wasn’t a pleasure to configure, and I’m sure there are a few other workable servers I passed over.

I highly recommend ownCloud. At the end of 2014, I switched from SOGo to ownCloud, and have not looked back. ownCloud has a better web UI, has a much stronger, more vibrant community, is alive and growing, is much easier to host (e.g. repos for popular GNU/Linux distributions, and a GLAMP stack), and is useful for more than just CalDAV (I’m already using it for file synchronization and CardDAV as well).

Desktop Client: Lightning

Since I’m a Thunderbird/IceDove user, Lightning is the obvious choice for a desktop client. We also use Thunderbird at the office and in my family. Lightning also supports Google Calendar, so just like with degooglifying email, you can switch your frontend and backend in separate steps.

The Evolution calendar is pretty awkward. I tried it when I was using SyncML, but it didn’t last long. There are other options too.

Web Client: SOGo, ownCloud or CalDavZap

I’d prefer a server with a web client, like SOGo or ownCloud, but for a standalone CalDAV web client (e.g. to pair with Radicale or DAViCal), CalDavZap [demo] seems pretty cool.

Mobile Client: SyncEvolution or aCal

Maemo: The reason I spent so much time on SyncML was that there was no CalDAV client for Maemo, but now SyncEvolution supports CalDAV/CardDAV sync!

Android: Use DAVdroid. It syncs CalDAV and CardDAV to native AOSP storage.

aCal is an Android CalDAV client, and a replacement for the proprietary Google calendar application. It works really well, but the UI feels really awkward and non-native. [Update: There’s also CalDAV-Sync, which I’d skipped over because it’s proprietary, but maiki pointed out that the developer at least intends to open source it eventually. I’m not sure if the Android Calendar is free software or one of the proprietary “Google experience” apps?] Both sync to local storage for offline support.

Conclusion

It took me a long time to figure this out, especially since I was focused on SyncML at first, but I’ve finally fully replaced Google Calendar with CalDAV solutions. SOGo, ownCloud and Radicale are all great CalDAV servers. SOGo and ownCloud have built-in web clients, but there’s also CalDavZap as a standalone web client. Lightning is the obvious cross-platform desktop CalDAV client of choice, and SyncEvolution, aCal and DAVdroid provide mobile clients for Maemo and Android.

The good news is there are plenty of options. As a bonus, most of these come with CardDAV support (which will be the focus of a future post), and ownCloud handles photos, music, and other files as well, so you may get more than just a calendar. Or, if it’s just a calendar you want, light-weight solutions like Radicale and CalDavZap give you just that.

I’m just thrilled to have finally figured this out.


HOWTO: CalDAV/CardDAV Sync from N900 to SOGo using SyncEvolution

When I moved to Maemo in 2010, I was using Google Calendar. I set up a sync via Exchange and eventually Erminig, which allowed me to sync my wife’s Google calendar too. But, when I started degooglifying and moving to free network services, I left Google Calendar for Funambol, using SyncEvolution as a Maemo SyncML client.

This was far from ideal: we lost shared calendars, there was no web UI, and desktop SyncML options were lacking. I quickly realized that CalDAV would be the better long-term option. I chose SOGo as my CalDAV server, but I couldn’t find a CalDAV client for the N900. (I tried the Funambol SOGo Connector, but just couldn’t figure it out.)

I’d just about given up on a comprehensive sync solution in Maemo… until I hit the jackpot a few days ago and stumbled upon a post by Thomas Tanghus on a CalDAV/CardDAV sync from the N900 to ownCloud using SyncEvolution.

It looks like SyncEvolution gained CalDAV/CardDAV support in version 1.2 — the N900 has a CalDAV client!

CalDAV/CardDAV Sync using SyncEvolution

Thomas’ instructions were for ownCloud, but they work for any CalDAV/CardDAV server. I only ran into two issues, I think because I’d been using SyncEvolution pre-1.2. The steps included here are 90% from Thomas, with those two additions.

Reinstallation

First, I ran into the same problem as Wolfgang: the SyncEvolution WebDAV template wasn’t there when I tried to run Thomas’ first step. Wolfgang’s solution worked for me as well: just uninstall and reinstall SyncEvolution.

$ root
# apt-get remove syncevolution syncevolution-frontend
# apt-get install syncevolution syncevolution-frontend

I suspect you’ll need to do this if you initially installed SyncEvolution before it included WebDAV support.

Configuration

After reinstalling, I was successfully able to follow Thomas’ instructions (ignore the “backend failed” notices in the first command):

syncevolution --configure --template webdav username=YOURUSERNAME password=YOURPASSWORD target-config@sogo
syncevolution --configure database=CALDAVURL backend=caldav target-config@sogo calendar
syncevolution --configure database=CARDDAVURL backend=carddav target-config@sogo contacts

The CalDAV URL for your default SOGo calendar is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Calendar/personal and the CardDAV URL for your default SOGo addressbook is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Contacts/personal. You can right-click on any additional calendar in SOGo and select Properties > Links to find the CalDAV link for that particular calendar.
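
For example, with a hypothetical server and username substituted in, the calendar command from above would look like:

syncevolution --configure database=https://sogo.example.com/dav/jdoe/Calendar/personal backend=caldav target-config@sogo calendar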

I ran into another issue with the next step in Thomas’ instructions. The above commands created new configuration files in /home/user/.config/syncevolution/sogo/, but the following commands operate on /home/user/.config/syncevolution/default/, in which I already had existing, older SyncEvolution configuration files. SyncEvolution complained about my pre-existing configuration, probably because I’d installed a much earlier version of SyncEvolution, and it said that I’d need to “migrate” with the following command:

syncevolution --migrate '@default'

Again, I suspect you’ll need to run this if you’d installed SyncEvolution pre-1.2. After this, I was able to continue with Thomas’ instructions.

In the following command, the username/password should stay blank:

syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogo username= password= sogo

Then, configure the databases, backend and sync mode for calendar and contacts:

syncevolution --configure sync=two-way backend=calendar database=N900 sogo calendar
syncevolution --configure sync=two-way backend=contacts database=file:///home/user/.osso-abook/db sogo contacts

I’m running SSL on my server, so I had to add this step to get past an SSL error:
syncevolution --configure SSLVerifyServer=0 target-config@sogo

(I bet there’s a way to configure it to properly verify the SSL certificate… but I’ll save that for another day.)
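
If I’m reading the SyncEvolution configuration properties correctly, the cleaner fix would be to point it at your system CA bundle instead of disabling verification. Untested on my setup; the bundle path shown is Debian’s:

syncevolution --configure SSLVerifyServer=1 SSLServerCertificates=/etc/ssl/certs/ca-certificates.crt target-config@sogo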

Testing

To test the configuration:

syncevolution --print-items target-config@sogo calendar
syncevolution --print-items target-config@sogo contacts

If that shows the data you expect to be there, then go ahead and run your first sync.

First Sync

SyncEvolution has several sync modes. The above commands configured the default mode to be ‘two-way’, but if you have initial data on both your client and server, you’ll want to run a ‘slow’ sync first.

syncevolution --sync slow sogo

My initial slow sync took almost an hour for ~2540 calendar events and ~160 contacts.

(If you want to overwrite your client with data from the server, or vice versa, look up ‘refresh-from-client’ or ‘refresh-from-server’ instead of ‘slow’.)

Scheduling

After that initial sync, you can run a normal sync at any time:

syncevolution sogo

While the command line is great for configuration and testing, you don’t want to open a terminal every time you want to sync your calendar. You could schedule the sync command via fcrontab, but the Maemo syncevolution-frontend GUI has a daily scheduler.
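
For reference, the fcrontab route would just be a cron-style entry (fcrontab -e as user). For example, a daily sync at 6:30:

30 6 * * * syncevolution sogo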

[Screenshot: the Maemo SyncEvolution frontend]

UPDATE: Syncing Multiple Calendars

I’ve adapted the above commands to create new target-configs for two other calendars I want to sync: my wife’s calendar, and the childcare calendar for my son. There may be a more elegant way to reuse the same target-config, but this works.

First, in the Calendar application, under Settings > Calendars, I created one for my wife’s calendar called “Heather” and one for my son’s calendar called “Noah.”

You can view all the available databases with the following command:

syncevolution --print-databases

You should see your new calendar listed here. It can be used by name, so long as that name is unique (and there aren’t any special characters to escape).

Then, adapting the above commands:
##### Heather
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogoheather
syncevolution --configure database=HEATHERCALDAVURL backend=caldav target-config@sogoheather calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogoheather username= password= heather@heather
# A one-way sync is fine here, because I just want to view my wife's calendar
syncevolution --configure sync=one-way-from-remote backend=calendar database=Heather heather@heather calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogoheather
syncevolution --print-items target-config@sogoheather calendar
# no need for a first slow sync with one-way mode set
syncevolution heather
##### Noah
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogonoah
syncevolution --configure database=NOAHCALDAVURL backend=caldav target-config@sogonoah calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogonoah username= password= noah@noah
syncevolution --configure sync=two-way backend=calendar database=Noah noah@noah calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogonoah
syncevolution --print-items target-config@sogonoah calendar
# refresh-from-remote is faster than slow, and I know the local calendar is empty
syncevolution --sync refresh-from-remote noah

YMMV and you may want a different configuration for your additional calendars, but this should give you some examples to work from. The key difference in these commands, besides the straight replacements, is to add a unique source name to all the --configure commands from SyncEvolution_Client on (except the SSL fix for the target-config), so that the client config ends up distinct from your primary calendar above.

Lastly, using the syncevolution-frontend, I scheduled daily automatic syncs for these two calendars as well, at different times.

Conclusion

I’m not sure if there’s a more elegant/concise configuration. I’m curious if there’s some way to combine the ‘target-config’ and ‘sogo’ steps… but Thomas spent over 12 hours on this and it works, so I’m not going to mess with it. I’m just thrilled that I’ve got this up and running.

After more than a decade in proprietary software slavery, and nearly two years of wandering in the calendar/contacts desert, I’ve finally reached the promised land of seamless and libre mobile, web and desktop calendar/contact sync. [Edit: Almost: The Maemo calendar application is proprietary…] Thank you, Thomas!


On Revoking Ubuntu’s Root Privileges

I’ve always had mixed feelings about Canonical, the company behind Ubuntu GNU/Linux. While they’ve made great contributions to free software, they’ve also been very inconsistent in their commitment to software freedom. Mark Shuttleworth’s response to the privacy concerns in Ubuntu 12.10 has fundamentally shattered my trust.

An Uneasy History

From restricted drivers to Launchpad to non-free documentation licences, there have always been concerns about Canonical’s commitment to free software. By 2010, the issues were becoming more serious. Ubuntu used to clearly warn users about restricted drivers, but in the Ubuntu Software Center, proprietary software is no longer merely tolerated: it’s celebrated and actively promoted. The average user doesn’t interact with Launchpad, but with Ubuntu One, Canonical’s proprietary service, users must delete, disable or ignore all of the places where it’s built into the Ubuntu experience. The concerns were starting to affect my everyday use.

But, I didn’t leave. I uninstalled the Ubuntu One packages, and ignored the Software Centre. Though, I did start exploring my options, with a Debian dual-boot and Trisquel in a virtual machine. However, there are many things that I do like about Ubuntu. My Ubuntu install is still 99% free software. Despite the controversy over the design process and community engagement, there are many things I like about Unity — the current obsession of Canonical’s founder, Mark Shuttleworth. I appreciate the outcome of his previous obsession as well — Ubuntu’s release cycle works really well. And, maybe there’s some sentiment — I’ve been running the same Ubuntu GNU/Linux install, across three different computers, since I first left Windows in 2007.

In 2010, my relationship with Ubuntu became uneasy, but it didn’t end. I’m not sure I can say the same for 2013.

The Amazon Dash Debacle

The EFF, RMS and this tongue-in-cheek bug report provide a decent summary of the issue: Ubuntu 12.10 raises serious privacy concerns by reporting searches in the Unity Dash — which have traditionally been local searches — to Amazon, relayed through Canonical.

That Ubuntu screwed up is obvious — at the very least, by enabling this by default. But it’s more than the mistake; it’s the response. In defending the decision, Mark Shuttleworth writes:

We are not telling Amazon what you are searching for. Your anonymity is preserved because we handle the query on your behalf. Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.

This doesn’t build my trust; this shatters it. I did not switch to a free software operating system to have the overlords flaunt their control over my computer. Canonical has done many annoying and proprietary things in the past, but “Erm, we have root” is antithetical to the very notion of software freedom. Ubuntu does not have root access on my machine, nor does Canonical have access to my data. Yes, I must trust the Ubuntu project every time I run updates on my system, but this is a relationship and responsibility to be handled delicately, transparently, respectfully — not a position of power to be flaunted. I trust Ubuntu to maintain the software on my computer. That I trust Ubuntu to provide my system with security updates and bug fixes does not in any way give them licence to do other things, like relay my Dash searches to a third party through a proprietary network service.

To make matters worse, Mark Shuttleworth recently referred to people “who rant about proprietary software” as “insecure McCarthyists.” In response to a question about “decisions that have been less than popular with the Free-software only crowd,” Shuttleworth writes:

Well, I feel the same way about this as I do about McCarthyism. The people who rant about proprietary software are basically insecure about their own beliefs, and it’s that fear that makes them so nastily critical. […]

If you think you’ll convince people to see things your way by ranting and being a dick, well, then you have much more to learn than I can possibly be bothered to spend time teaching.

Aside from the pot-kettle-black nature of his tone, this does not build my trust in Canonical.

These responses strike at the very heart of my decision to use GNU/Linux — software freedom. Canonical has never consistently cared about software freedom, but their offences and missteps have come closer and closer to my everyday computing. Now, a serious violation of privacy is brushed aside dismissively because I should just trust Ubuntu and Canonical because “erm, we have root,” and raising concerns about proprietary software is akin to “McCarthyism.”

No, Mr. Shuttleworth, you don’t have root. The fact that you think you do makes me want to move far away from Ubuntu.

After Ubuntu: An Exit Strategy

I would rather not leave Ubuntu. I don’t take the decision lightly. But developments over the past few years have made me very uneasy, and Shuttleworth’s attitude has shattered any trust I ever had in Canonical. Even if Ubuntu fixes this particular problem, I’m not sure what can be done to rebuild trust.

At the very least, I’m preparing an exit strategy:

  1. I’m going to install GNOME 3 in Ubuntu (and maybe LXDE). I like many things about Unity, but adjusting to a different desktop environment will make leaving Ubuntu easier.
  2. Then, I’ll re-evaluate other GNU/Linux distributions. I really like Debian GNU/Linux — it’s just the release cycle that gets me for a primary machine, but I’ve heard good things about Debian testing for everyday use. I’ll also take another look at Trisquel.
  3. I may give Ubuntu 13.04 a chance. I don’t look forward to migrating to another distribution, and the Ubuntu GNOME Remix might be a compromise. Also, it’s not just me — my wife, father, and some machines at the office all run Ubuntu, as well as my living room and recording studio machine. I’m just not sure if I can trust Ubuntu anymore. So, seeing as it may take me a few months to try out other desktop environments and distributions, I may wait to see what changes in Ubuntu 13.04, and re-evaluate middle-ground options like the Ubuntu GNOME Remix, though I’m wary of just “fixing” the problem for myself.

I’ve been patient through many Canonical missteps, and I’ve defended the Ubuntu project over the years. But the “erm, we have root” response shatters my trust in any Shuttleworth-run endeavour. It’s antithetical to the reason I switched to GNU/Linux — software freedom — and I’ll switch again if that’s what it takes.


Degooglifying (Part III): Web Search

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email, feed reader.

Of all Google services, you’d think the hardest to replace would be search. Yet, although search is critical for navigating the web, the switching costs are low — no data portability issues, easy to use more than one search engine, etc. Unfortunately, there isn’t a straightforward libre web search solution ready yet, but switching away from Google to something that’s at least more privacy-friendly is easy to do now.

Quick Alternative: DuckDuckGo

In one sense, degooglifying search is easy: use DuckDuckGo. DuckDuckGo has a strong no-tracking approach to privacy. The !bang syntax is awesome (hello !wikipedia), the search results are decent (though I still often !g for more technical, targeted or convoluted searches), it doesn’t have any search-plus-your-world nonsense or whatever walled garden stuff Google has been experimenting with lately, and it’s pretty solid on the privacy side. After just a few days, DuckDuckGo replaced Google as my default search engine, and my wife has since switched over as well.

The switch from Google Search to DuckDuckGo is incredibly easy and well worth it. If you’re still using Google Search, give DuckDuckGo a try — you’ve got nothing to lose.

But… DuckDuckGo isn’t a final destination. Remember: the point of this exercise isn’t for me to “leave Google,” but to leave Google’s proprietary, centralized, walled gardens for free and autonomous alternatives. DuckDuckGo is a step towards autonomy, as web search sans tracking, but it is still centralized and proprietary.

Web Search Freedom

A libre search solution calls for a much bigger change — from proprietary to free, from centralized to distributed, from a giant database to a peer-to-peer network — not just a change in search engines, but a revolution in web search.

YaCy

Last summer, I ran a search engine out of my living room for a few months: YaCy — a cross-platform, free software, decentralized, peer-to-peer search engine. Rather than relying on a single centralized search provider, YaCy users can install the software on their own computers and connect to a network of other YaCy users to perform web searches. It’s a libre, non-tracking, censorship-resistant web search network. The problem was that it wasn’t stable or mature enough last summer to power my daily web searches. I intend to install it again soon, because as a peer-to-peer effort it needs users and usage in order to improve, but an intermediate step like DuckDuckGo is necessary in the meantime.

Although YaCy is designed to be installed on your own computer, there is a public web search portal available as a demo.

Seeks

Seeks is another interesting project that takes a different approach to web search freedom. Seeks is “an open, decentralized platform for collaborative search, filtering and content curation.” As far as I understand, Seeks doesn’t replace existing search engines, but it adds a distributed network layer on top of them, giving users more control over search queries and results. That is, Seeks is a P2P collaborative filter for web search rather than a P2P indexer like YaCy. Rather than replacing web indexing, Seeks is focused on the privacy, control, and trust surrounding search queries and results, even if it sits on top of proprietary search engines.

Seeks also has a public web search portal (and DuckDuckGo supports !seeks). As you can tell, its results are much better than YaCy’s, but Seeks is tackling a smaller problem and still relying on existing search engines to index the web.

Conclusion

DuckDuckGo, though proprietary and centralized, provides some major privacy advantages over Google and is ready to be used today — especially with Google just a !g away.

But web search freedom requires a revolution like that envisioned by YaCy or Seeks. Seeks seems like more of a practical, incremental and realistic solution, but it still depends on proprietary search. YaCy is more of a complete solution, but it’s not clear whether its vision is technically feasible.

I intend to experiment with both of these projects — p2p services need users to improve — and continue to watch this space for new developments.


Degooglifying (Part II): Feed Reader

This post is part of a series in which I am detailing my move away from centralized, proprietary network services. Previous posts in this series: email.

Next to email, replacing Google Reader as my feed reader was relatively easy, though I’ve chosen to use the move as an opportunity to clean out my feed subscriptions, rather than doing a straight export/import. I’ve replaced Google Reader with two free software feed readers: Liferea (desktop) and Tiny Tiny RSS (web).

A reading list can be very personal, and it can also be very misleading out of context. For example, my reading list suggests all sorts of things about my religious and political views, about the communities to which I may be connected, etc. Though, it would take some analysis to try and figure out why I subscribe to any particular feed. Is the author’s view one I espouse and whole-heartedly hold as my own? One I find interesting, challenging, or thought-provoking? Or one I utterly disagree with yet want to learn more about?

There is something private about a complete reading list, much like the books you might check out from the library or the videos you might rent from a store. As we get more of this content through the internet, it’s easy for these lists (and even more behavioural data about how we interact with them) to be compiled in large, centralized, proprietary databases, alongside all sorts of other personal information that would not be available to a traditional Blockbuster or public library. Besides the software freedom issues, this is another revealing personal dataset that I can claim more control over by exercising software freedom, rather than dumping it into a big centralized, proprietary database. Both software freedom and privacy issues are at play here.

Desktop Client: Liferea

Liferea is a desktop feed reader for GNU/Linux. Google Reader was my first feed reader, so a desktop feed reader was a bit of an adjustment, but there are a few things I really like about it:

  • Native application: It integrates well with my desktop, with something like Ubuntu’s Messaging Menu, and it’s a client that feels somewhat familiar in GNOME.
  • Control over update frequency: One of the things that bugged me about Google Reader is it constantly checks for new content, whether or not you want it to. Sometimes, I don’t want to see anything new until tomorrow. It’s nice to be able to click update, read, and then let it be until I choose to update again. (Though, the downside is missing material if you don’t update often enough.)
  • Integration with Google Reader / Tiny Tiny RSS: This is a killer feature. You can use Liferea to read feeds through the Google Reader API, and recent versions have added support for a tt-rss backend as well. This helped with my transition because I could use Liferea as a front-end for Google Reader before I was prepared to migrate my feeds, to test it out, to ease the transition, etc. And, I will be able to use Liferea and tt-rss together to have both desktop and web-based clients.
  • Embedded Web Browser: This is also a killer feature. Websites that don’t have full-text feeds and only offer a content snippet are annoying in Google Reader, because you have to leave Reader to see the full content. But, in Liferea, you can tell it to automatically load content for a feed using the embedded web browser instead of just viewing the snippet, or hit enter on any feed entry to load the URL using the embedded browser. It even has basic tabbed browsing support, so you don’t have to flip back and forth between your web browser and your feed reader. This makes reading content from non-full-text feeds easy without leaving Liferea.
  • Integrated Comments: Liferea can detect comment feeds on many blogs, and it shows a handful of comments underneath entries. Combine this with a quick enter key to visit the web page with the embedded browser, and you no longer have to leave the feed reader to participate in the comments. This is a nice step up from the usual isolation of a feed reader from comment threads.
  • Authentication support for protected feeds: This is a useful feature for subscribing to protected content, such as an updates feed on an internal wiki.

I tested Liferea as a Google Reader front end, then migrated subscriptions group by group (giving me a chance to re-organize, though I could have just used an OPML export/import), and once I upgrade to Liferea 1.8, I’ll connect it to tt-rss.

Other Desktop Clients: RSSOwl is a free software, cross-platform (Windows, Mac OS X, GNU/Linux) feed reader, which also has Google Reader integration. I have only tried this briefly, so that I could recommend it to Windows users.

Web Client: Tiny Tiny RSS

Tiny Tiny RSS is a web-based feed reader, similar to Google Reader, but free software that you can run on your own web server. There are some feeds I read all the time, and others I’ll skim or catch up on when I have a chance. For the must-read feeds, it makes a huge difference to be able to read them from my mobile computer. With Google Reader, I used grr, and there is a mobile web interface. I migrated my must-read feeds to tt-rss instead of Liferea so that I’d have easy access to them while away from my laptop, while still being able to use Liferea on my laptop with its tt-rss integration. I’m moving more and more feeds into tt-rss, though I plan to leave some less frequently updated, less important feeds, or feeds that are difficult to read from my mobile, in Liferea only.

Some cool features:

  • Publish articles to shared feed: Google Reader had a shared articles RSS feed, and I’d piped that into blaise.ca. tt-rss has a similar RSS feed, which I’ve also been able to include on my website.
  • Mobile web interface: tt-rss has a mobile web interface for webkit browsers powered by iUI. With Macuco on my N900 or the Android web browser, it works quite well — though, only for full-text feeds.
  • Filters: With tt-rss, you can create filters on feeds. So, for example, I am automatically publishing articles from the Techdirt feed that I’ve written, or I can auto-delete posts for a particular series or author that I’m not interested in to custom tailor a feed to my interests. It’s very useful for automating certain actions or reducing noise on a high-traffic feed.
  • Custom CSS: I suppose you could customize Google Reader’s styles with a GreaseMonkey script or something, but tt-rss offers custom CSS overrides and multiple themes out of the box, which is great for setting some more readable default colours.
  • API: tt-rss has an API, which allows for Liferea integration, an Android client, etc.
  • Authentication support for protected feeds: Like Liferea, tt-rss provides support for feeds requiring authentication.

As with Liferea, tt-rss gives me control over how frequently updates run, since I schedule the update job. But that control also comes without the downside of missing content if I’m away from my feed reader for a while; unlike a desktop client that needs to be open to retrieve new content, tt-rss does so in the background from the server, so it can still track new entries while I’m away. It has the benefits of Google Reader’s persistent background updates, while still giving me control over frequency and scheduling. I have the update job set to run a few specific times through the day, and tt-rss gives you the option to set an even longer update interval for any given feed.
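
My update job is just the stock tt-rss updater script run from cron a few times a day. A sketch, with the install path assumed:

# crontab entry: update feeds at 7:00, 12:00 and 18:00
0 7,12,18 * * * php /var/www/tt-rss/update.php --feeds --quiet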

Though I initially migrated from Google Reader to Liferea, Tiny Tiny RSS is quickly becoming my primary feed reader, with Liferea set to become my primary desktop client for tt-rss and the home for less frequent, less important, or non-full-text feeds.

Other Web Clients: NewsBlur is another web-based, free software feed reader, which is based on a more modern web stack and seems to have some neat features. I have yet to try it, and I’m not sure of the state of its mobile or API/desktop integration, which are two things I really like in tt-rss. It’s worth taking a look at though for sure. NewsBlur.com has a hosted service, if you aren’t able to run your own web server or don’t have a friend who’s running one.

Conclusion

My migration away from Google Reader is essentially complete. I have fewer than a dozen feeds remaining there, mostly old or broken ones. I no longer log into Google Reader to read anything, though I’ve got one more round of cleaning to do to empty my account. I’m currently split between Liferea and tt-rss, but with Liferea 1.8, I’ll be able to integrate the two. I also have other libre options to explore with NewsBlur and RSSOwl.

There is nothing that I miss about Google Reader, and if anything, with an embedded browser, native desktop options, integrated comments, control over update scheduling, feed filters, and authentication support for protected feeds, I have a lot of useful features now that I didn’t have with Google’s proprietary service — never mind more software freedom and less surveillance.


Can Facebook Really Bring About A More Peer-to-Peer, Bottom-Up World?

This post originally appeared on Techdirt

Mark Zuckerberg’s letter to shareholders included in Facebook’s IPO filing contains a pretty bold vision for Facebook to not just connect people and enable them to share, but to fundamentally restructure the way that the world works:

By helping people form these connections, we hope to rewire the way people spread and consume information. We think the world’s information infrastructure should resemble the social graph — a network built from the bottom up or peer-to-peer, rather than the monolithic, top-down structure that has existed to date. We also believe that giving people control over what they share is a fundamental principle of this rewiring.

We have already helped more than 800 million people map out more than 100 billion connections so far, and our goal is to help this rewiring accelerate. [emphasis added]

That sounds pretty lofty, but if you recognize that Facebook provides a social networking service that hundreds of millions of people use — but forget for a moment that it’s Facebook — it’s quite a bold “social mission.” And there are many examples of how the service has been used as a key tool in effecting change on everything from opposition to the Canadian DMCA to the Arab Spring. There’s no doubt that the service makes it easier for people to organize in a more bottom-up way.

But, once you remember that it’s Facebook we’re talking about, the vision sounds more problematic. Could Facebook ever truly bring about a peer-to-peer, bottom-up network? The notion seems to be an inherent contradiction to Facebook’s architecture — as a centralized, proprietary, walled garden social networking service. Facebook may enable a more bottom-up structure, but it’s a bit disingenuous for Zuckerberg to decry a monolithic, top-down structure when Facebook inserts itself as the new intermediary and gatekeeper. As a centralized, proprietary, walled garden service, Facebook is a single point for attacks, control, and surveillance, never mind controversial policies or privacy concerns. Facebook may enable a more bottom-up and peer-to-peer network compared to many things that came before, but there is something fundamentally at odds with a truly distributed solution at the core of its architecture and its DNA.

To realize the full potential of bottom-up, peer-to-peer social networking infrastructure, we need autonomous, distributed, and free network services — the sort of vision that StatusNet/Identi.ca or Diaspora have tried to bring about. Rewiring the world to create a more bottom-up, peer-to-peer network is a bold vision for Zuckerberg to put forth — and one that Facebook has advanced in many ways — yet it’s fundamentally at odds with the reality of Facebook as a centralized and proprietary walled garden.

Read the comments on Techdirt.


Degooglifying (Part I): Email

I’ve begun to write about free (libre) network services, and the hazards of being a tenant on the web instead of a property owner. I began slowly moving away from Google in 2009, but I’ve accelerated that process since the launch of Google+. I thought I’d begin to share my process of degooglification.

To be clear, I still generally trust and respect Google, and I do believe they’re generally less evil than most, but…

  1. Despite great support for open source software, they remain a proprietary software company at their core. Google is a friend to open source infrastructure, but not to free (libre) network services. Specifically, it’s the proprietary network services I’m degoogling from.
  2. The sheer amount of data — email, contacts, documents, calendar, RSS feeds, social graph, phone calls, photos, GPS location, never mind web searches… — aggregated into one single account with a proprietary service provider is an obviously bad idea. Even if Google never intends to do anything bad with it, they can make mistakes. Even if Google never does anything bad itself, it’s a single vector for attack from an outsider. And it’s not your account.

Email is one of the easiest services from which to degooglify. It’s also a good example of a multi-step transition.

Changing the Frontend

The first thing I did was to stop using the Gmail web interface. I configured my Gmail account in Thunderbird, which I was already using for other email accounts. Google’s commitment to data portability often makes it easy to switch your front-end software before switching the back-end, which can make a transition much smoother. Rather than cutting over cold turkey, you can ease into a new interface. My Gmail account is still active, but it rarely sees any important email anymore. I’ve transitioned 99% of my email to other accounts on domains I control (like this one).

Changing the Backend

Gradually, I started using my blaise.ca email addresses instead of my Gmail account, until eventually I wasn’t getting much email through Gmail anymore. With my Gmail account configured in Thunderbird, it was easy to archive the contents on my computer. You can access Gmail labels as IMAP folders and just copy email from one account to another, and Thunderbird will even offer to synchronize a local copy of your Gmail account. I never used Gmail contacts, but an export and import to Thunderbird would get your data out (more on contacts another time). Lastly, I’m still monitoring my Gmail account via Thunderbird, but I could set an auto-reply and/or forwarder if I really wanted to force that last 1% over. I will probably do that eventually.

Other Considerations

There are a few other perks of a Gmail account that are pretty easy to get from libre alternatives:

  • Hosted: Not everyone is going to run their own mail server, or have a friend or family member who does. But there are hosted, libre services, like riseup.net
  • Storage space: in 2004, 1 GB of email was a huge game changer. Today, it’s not very hard to get that kind of storage space on a server for cheap.
  • Chat: Google uses the open standard XMPP for its chat service. I run my own XMPP server, and there are public Jabber services like jabber.org. I’ve simply added my Gmail contacts to my blaise@blaise.ca XMPP account. More on chat another time.
  • Conversations: The Conversations add-on provides Gmail-style conversations inside Thunderbird.
  • Spam filtering: Gmail has a good track record on spam filtering, but SpamAssassin, ClamAV and a greylisting policy can produce great results on your own server nowadays. I don’t get any more spam to my blaise.ca inbox than I do to my Gmail inbox.
  • Webmail: I love Thunderbird, but not everyone wants to use a desktop client, and you’re not always on your own computer. Roundcube is already a great free software webmail client, and it hasn’t even hit 1.0 yet. Many hosting providers already offer Roundcube to their customers.
  • Mobile: With IMAP, my email is easily accessible from and synchronized between Thunderbird, Roundcube, and my mobile computer’s IMAP client.

Email is probably the easiest thing to degooglify. It can be a smooth, gradual transition, and there are lots of good alternatives, as well as benefits from leaving Gmail. Over the next while, I’ll share my ongoing efforts to degooglify other aspects of my online life.
