I spent a few hours troubleshooting a problem with Thin when I upgraded from Redmine 2.5 to Redmine 3.0 on a Debian Wheezy server. I found a solution that’s worked for me. I’m not confident enough with Ruby and this setup to make a HowTo on the Redmine wiki, but I found next to nothing on this specific problem when searching the web, so I figure it’s important to post this in case it helps anyone else.
I’m running Redmine on Debian Wheezy with Nginx and Thin. I’m also running Redmine on another Wheezy server with Apache (and mod_passenger, I think). The latter upgrade to Redmine 3.0 went fine, but when I ran the same steps on the thin/nginx server, I was getting a 502 Bad Gateway error from nginx and found this in the thin logs:
!! Unexpected error while processing request: uninitialized constant Rack::MethodOverride::REQUEST_METHOD
Yet, when I ran Redmine with webrick (per Redmine’s installation instructions), it worked fine. Since it worked fine with webrick and on my other server, it seemed like the problem was at the Thin layer.
This Stack Overflow issue was the closest I could find, though it was with a different Rack application. The problem was a mismatch between the Rack version and the one required by the application.
I couldn’t find Redmine-specific examples (hence this post), but this one Redmine guide did say “Rack 1.0.1. Version 1.1 is not supported with Rails 2.3.5”.
Finally, the Stack Overflow issue linked to this issue in the Passenger tracker which pointed me towards the answer:
Your system has two Rack versions installed. One is version 1.5.0, installed by APT, and is located in /usr/lib/ruby/vendor_ruby. The other one is version 1.6.0, installed by RubyGems, and is located in /var/lib/gems/2.1.0/gems/rack-1.6.0.
Before Passenger loads your app, Passenger calls require “rack”. Because /usr/lib/ruby/vendor_ruby is in Ruby’s $LOAD_PATH, Passenger loads the Rack 1.5.0 library installed by APT.
However Sinatra requires Rack 1.6.0 or later…
This was my problem. When I installed Redmine 2.5, I ran `apt-get install thin`. It pulled in ruby-rack 1.4.1 as a dependency. This “conflict” wasn’t a problem in Redmine 2.5, which has “rack (~> 1.4.5)” in Gemfile.lock — the versions are close enough. However, Redmine 3.0 has “rack (~> 1.6)” in Gemfile.lock… hence the error I was seeing, as Rack 1.4.1 installed via apt was probably being loaded in place of the 1.6.1 Gem.
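As a quick sanity check of the mismatch, this snippet (my own illustration, not from Redmine) uses RubyGems’ built-in version matching to show why the apt-installed Rack can’t satisfy Redmine 3.0’s constraint:

```ruby
require 'rubygems' # provides Gem::Requirement and Gem::Version

redmine3 = Gem::Requirement.new('~> 1.6') # the constraint from Redmine 3.0's Gemfile.lock
apt_rack = Gem::Version.new('1.4.1')      # ruby-rack as pulled in by apt
gem_rack = Gem::Version.new('1.6.1')      # rack as installed by RubyGems

puts redmine3.satisfied_by?(apt_rack) # false: the copy Ruby was actually loading
puts redmine3.satisfied_by?(gem_rack) # true:  the copy Redmine 3.0 expects
```

If the apt copy wins the $LOAD_PATH race, Redmine 3.0 runs against a Rack API it never declared support for, which is exactly the kind of failure the uninitialized-constant error suggests.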
I tried to `apt-get remove ruby-rack`, but it was going to remove thin as well. (And I checked the Jessie repos, but its ruby-rack is still only 1.5.2.) I identified two solutions:
To install thin separately, I first removed it and ruby-rack, while marking a couple of other dependencies as manually installed and keeping the /etc/init.d/thin file…
apt-get remove ruby-rack thin
apt-get install ruby-eventmachine ruby-daemons # not sure if this was necessary or advisable, just a guess
Then, following the thin installation instructions, I was able to install the gem:
apt-get install ruby-dev build-essential
gem install thin
# Update the path in /etc/init.d/thin from /usr/bin/thin (apt) to /usr/local/bin/thin (gem)
perl -pi -w -e 's/\/usr\/bin\/thin/\/usr\/local\/bin\/thin/g' /etc/init.d/thin
I was able to start thin again (`service thin start`), but I was getting a new error for which I found the solution here: add thin to your Gemfile.
So, somewhat reluctantly, I opened up Gemfile in the Redmine root directory and in between a couple other gem lines I added the line:
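Per the linked fix (“add thin to your Gemfile”), the line is just the standard gem declaration (exact quoting style doesn’t matter):

```ruby
# Gemfile, in the Redmine root directory
gem "thin"
```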
Then, I restarted thin, and Redmine was working again!
Things I don’t like about this solution or am unsure of:
Hopefully this can help anyone else using Redmine(3.0)/Debian/Nginx/Thin seeing that error. I’d be happy to share configuration and fresh installation instructions once I have some confidence that this approach is sane, and in particular once I have a better solution than modifying the Gemfile.
I got into a public fight with IceWeasel/Firefox 30 and the Mozilla sync service last month (aired on pump.io), and I’d been meaning to publish my “fix”… but it was so hacky that I don’t know which part of it actually worked. But, since it’s somewhat time-sensitive during this sync service transition, I figure it’s better to share this incomplete hack than nothing at all.
I recently switched my ThinkPad X60 from Ubuntu to Debian testing. When I tried to set up IceWeasel 30 with the Mozilla sync service, it started prompting me about creating a Firefox account — something I have absolutely no interest in doing (in fact, I was planning on moving my Firefox sync to off Mozilla’s servers to ownCloud).
I discovered that, while previously paired devices would still be able to sync using Mozilla’s old sync service for a limited time, as of Firefox/IceWeasel 30, pairing new devices to the old sync service is no longer supported.
This made me really angry. If I’d set up sync and paired the device before “upgrading” to IceWeasel/Firefox 30, I’d be syncing no problem, but Firefox/IceWeasel 30 refused to allow this. It was an infuriating combination of what felt like an anti-feature, and pressure from Mozilla to sign up for a new sync service that seems worse on the privacy front (e.g. no server-side encryption, and self-hosting is experimental now because you’d also have to self-host the Accounts service…).
Technically, this wasn’t a new device. I’d already had my X60 Firefox set up to sync before I switched from Ubuntu to Debian. So, I managed to trick IceWeasel into letting me sync again.
This was pretty reckless (though the stakes were very low — a brand new IceWeasel profile), and I’m not sure exactly what worked, so use these instructions at your own risk, etc. etc.:
I think it was something in copying the services.sync* settings that allowed the Pair a New Device screen to work the first time I reopened IceWeasel. Then, after pairing, resetting the timestamps to 0 on the services.sync.*.lastSync* settings caused IceWeasel to download everything again anew.
YMMV. I’m not sure how much of my success depended on being able to hijack an existing client sync ID from a device that was previously configured but no longer being used (i.e. my former Ubuntu Firefox profile on my X60 that I was replacing with Debian IceWeasel). And these steps are vague and unspecific because I’m not really sure what precisely worked, or what may be unwise for you to try if you don’t know what you’re doing… but feel free to contact me if you want more specifics on my setup and experience, and I may be able to help.
At the very least, this will allow me to continue using the old sync service for now, until I figure out what my options are re: self-hosting, ownCloud, Mozilla’s new Firefox Accounts-based sync service, etc.
As an avid reader of both David Weinberger and Pope Francis, it was very interesting to see those two worlds collide in Weinberger’s cross-tradition interpretation of Pope Francis’ message for World Communications Day.
First, Weinberger looks at Pope Francis’ initial characterization of the Internet:
The internet, in particular, offers immense possibilities for encounter and solidarity. This is something truly good, a gift from God.
Weinberger calls this a “remarkable characterization” compared to all of the other ways Pope Francis could have started:
Not: The Internet is a source of temptations to be resisted. Not: The Internet is just the latest over-hyped communication technology, and remember when we thought telegraphs would bring world peace? Not: The Internet is merely a technology and thus just another place for human nature to reassert itself. Not: The Internet is just a way for the same old powers to extend their reach. Not: The Internet is an opportunity to do good, but be wary because we can also do evil with it. It may be many of those. But first: The Internet — its possibilities for encounter and solidarity — is truly good. The Internet is a gift from God.
While I agree with Weinberger, there is also something here that is not fundamentally new. Even just looking at World Communications Day messages from the past quarter century, Pope Benedict XVI and Pope John Paul II both typically led with the goodness of the new technology.
“I wish to consider the development of digital social networks which are helping to create a new “agora”, an open public square in which people share ideas, information and opinions, and in which new relationships and forms of community can come into being. These spaces, when engaged in a wise and balanced way, help to foster forms of dialogue and debate which, if conducted respectfully and with concern for privacy, responsibility and truthfulness, can reinforce the bonds of unity between individuals and effectively promote the harmony of the human family. The exchange of information can become true communication, links ripen into friendships, and connections facilitate communion.”
“New horizons are now open that were until recently unimaginable; they stir our wonder at the possibilities offered by these new media and, at the same time, urgently demand a serious reflection on the significance of communication in the digital age. This is particularly evident when we are confronted with the extraordinary potential of the internet and the complexity of its uses.”
“Many benefits flow from this new culture of communication […] While the speed with which the new technologies have evolved in terms of their efficiency and reliability is rightly a source of wonder, their popularity with users should not surprise us, as they respond to a fundamental desire of people to communicate and to relate to each other.”
“Technological advances in the media have in certain respects conquered time and space, making communication between people, even when separated by vast distances, both instantaneous and direct. This development presents an enormous potential for service of the common good and “constitutes a patrimony to safeguard and promote” (Rapid Development, 10).”
“Modern technology places at our disposal unprecedented possibilities for good”
“The extraordinary growth of the communications media and their increased availability has brought exceptional opportunities for enriching the lives not only of individuals, but also of families.”
“For the Church the new world of cyberspace is a summons to the great adventure of using its potential to proclaim the Gospel message.”
“With the recent explosion of information technology, the possibility for communication between individuals and groups in every part of the world has never been greater. Yet, paradoxically, the very forces which can lead to better communication can also lead to increasing self-centredness and alienation. We find ourselves therefore in a time of both threat and promise.”
Even on other technologies… Videocassettes and audiocassettes in the formation of culture and of conscience (Pope John Paul II, 1993):
“Let me say again, and with emphasis, that the audiocassette and the videocassette are gifts of God, gifts, we may say, kept in His treasury through all the ages until our time, kept — for us.”
And even pretty early on in the days of “computer culture” going mainstream… The Christian Message in a Computer Culture (Pope John Paul II, 1990):
“Surely we must be grateful for the new technology which enables us to store information in vast man-made artificial memories, thus providing wide and instant access to the knowledge which is our human heritage, to the Church’s teaching and tradition, the words of Sacred Scripture, the counsels of the great masters of spirituality, the history and traditions of the local Churches, of Religious Orders and lay institutes, and to the ideas and experiences of initiators and innovators whose insights bear constant witness to the faithful presence in our midst of a loving Father who brings out of his treasure new things and old (cf. Mt 13:52).”
Really, what they’re doing here is following the basic pattern of Genesis — first, the goodness of creation, then, the problem of sin.
So, the affirmation of goodness can be traced back to the Church’s earliest proclamations on the Internet and computer culture, in World Communications Day messages as well as in other documents. But, there is something that seems to have shifted about the Papal characterizations of what the Internet is. As Weinberger writes:
The Catholic Church put the “higher” in “hierarchy,” so it’d be understandable if it viewed the Internet as a threat to its power. Or as a source of sinful temptation. Because it’s both of those things. The Pope might even have seen the Internet quite positively as a powerful communication medium for getting out the Church’s message.
While the first two characterizations are more caricatures, the third is not. Certainly in some messages from Pope John Paul II, you can see more of an emphasis on the Internet as a tool for evangelization (though, keeping in mind that evangelization requires dialogue or personal communication and encounter — it’s still not a broadcast approach). It might be said that Pope Benedict XVI picked up on this, but further developed some thinking on relationships, and here Pope Francis picks up and focuses on the Internet as a way of encountering our neighbours. A more thorough analysis of other writings might be required to support that conclusion, but there does seem to be something new in Pope Francis’ emphasis.
As Pope Francis writes (emphasis added):
It is not enough to be passersby on the digital highways, simply “connected”; connections need to grow into true encounters. We cannot live apart, closed in on ourselves. We need to love and to be loved. We need tenderness. Media strategies do not ensure beauty, goodness and truth in communication. The world of media also has to be concerned with humanity, it too is called to show tenderness. The digital world can be an environment rich in humanity; a network not of wires but of people. The impartiality of media is merely an appearance; only those who go out of themselves in their communication can become a true point of reference for others. Personal engagement is the basis of the trustworthiness of a communicator. Christian witness, thanks to the internet, can thereby reach the peripheries of human existence.
As Weinberger puts it (emphasis added):
For the Pope, the Internet is an opportunity to understand one another by hearing one another directly. This understanding of others, he says, will lead us to understand ourselves in the context of a world of differences […]
If we frame the Internet as being about people being human to one another, people being neighbors, the differences in belief are less essential and more tolerable. Neighbors manifest love and mercy. Neighbors find value in their differences. Neighbors first, communicators on occasion and preferably with some beer or a nice bottle of wine.
Neighbors first. I take that as the Pope’s message, and I think it captures the gift the Internet gives us. It also makes clear the challenge. The Net of course poses challenges to our souls or consciences, to our norms and our expectations, to our willingness to accept others into our hearts, but also a challenge to our understanding: Stop thinking about the Net as being about communication. Start thinking about it as a place where we can choose to be more human to one another.
That I can say Amen to.
After some strange behaviour in gPodder 2.20.3 on my N900 yesterday (it stopped responding to episode actions), I quit gPodder and tried to start it up again, but it would crash during startup every time with a “database disk image malformed” error from line 316 of dbsqlite.py on the query: “SELECT COUNT(*), state, played FROM episodes GROUP BY state, played”.
First, I opened up the sqlite database (gPodder’s database.sqlite) directly:
sqlite3 database.sqlite
I could run that query and others no problem.
Then I found this guide on repairing a corrupt sqlite database. Following it, I ran the integrity check below, which returned a couple of errors along with the same “database disk image malformed” message:
sqlite> pragma integrity_check;
So I followed the instructions from spiceworks, dumped my database to file and reloaded it into a new database:
echo .dump | sqlite3 database.sqlite > gpodder.sql # generate dump file
mv database.sqlite database.sqlite.bak # backup original database
sqlite3 -init gpodder.sql database.sqlite # initialize a new database from the dump file
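To confirm the repair before restarting gPodder, the integrity check from earlier can be rerun non-interactively against the rebuilt database; a healthy database prints just “ok”:

```shell
# re-run the integrity check against the rebuilt database
sqlite3 database.sqlite "PRAGMA integrity_check;"
```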
And, voila, gPodder is working again.
Finding a replacement for Google Calendar has been one of the most difficult steps so far in my degooglification process, but in the end I’ve found a bunch of great, libre alternatives.
Beyond the basic criteria for free network services, I was looking for:
I started with SyncML, an open standard for syncing calendar and contact data. SyncEvolution is a great SyncML client, with both GUI and command line tools available for GNOME and Maemo GNU/Linux, and Funambol is an AGPL SyncML server, with an Android client.
I set up Funambol and migrated from Google Calendar in July 2011, using SyncEvolution on my N900 and my laptop, but there were a bunch of problems. It was unstable around the edges, not handling deletes very well, and sometimes choking and failing on certain characters (a curly quotation mark, maybe?) in event titles. When I tried to switch my parents over on Android, it was a nightmare trying to figure out where the sync was failing, and they eventually moved to Google Calendar instead. SyncEvolution only syncs with Evolution on the desktop; there’s no mature SyncML solution for Lightning. The Funambol free software edition felt like a bit of an afterthought as well, with poor or outdated documentation, and a crippled, totally useless “demo” web UI. There was no calendar sharing or access controls either. Plus, Funambol is a pretty heavy application, targeted at mobile carriers, not someone who wants to run it from their living room.
SyncML with Funambol and SyncEvolution allowed me to leave Google Calendar behind, but I ended up living off my mobile calendar, using Funambol essentially as a backup service. I had no web client, no shared calendars, and eventually stopped syncing to Evolution on my laptop. Part of the problem was Funambol, but part of the problem was also SyncML, which seems to be a clunky standard, designed for an older paradigm of syncing with offline mobile clients.
I quickly realized that CalDAV was the better open standard.
CalDAV, an extension of WebDAV, is an internet standard for remote access to calendar data. It’s a more modern standard than SyncML — though SyncML does have better support on older mobile devices. (There’s also CardDAV for contacts, but I’ll leave that for a future post.)
However, there are a ton of CalDAV servers.
Here are my favourites so far:
| Server | Pros | Cons |
|--------|------|------|
| SOGo | Works with anything via connectors; well-integrated with Thunderbird/Lightning, and web UI modelled after Lightning; Ubuntu/Debian repos | UI isn’t super pretty; comes with a webmail client I don’t want; heavy, took some effort to install (e.g. made a custom MySQL user auth table, in the absence of an LDAP server) |
| ownCloud | Very alive; support for contacts, photos, music, etc.; Ubuntu/Debian repos | |
| Radicale | Simple, elegant, light-weight | For sysadmins: no UI |
I tried a few others, but I wouldn’t recommend them:
Others I didn’t bother to try:
I’m using SOGo. Though, that’s partially because it was the most comprehensive solution that I had working at the time when my wife went back to work after maternity leave and we needed sharable calendars again to coordinate scheduling for childcare. But SOGo also has some nice, more advanced features, like the ability to subscribe to remote CalDAV feeds on other servers through the web UI. I’m pretty happy with SOGo, though I’ll certainly be revisiting ownCloud and Radicale at some point. When I first tried ownCloud, it was immature, but it’s since grown a lot. And when I first tried Radicale, it was using a “strange” ACL model, but that’s been overhauled in 0.8. DAViCal was working, though it wasn’t a pleasure to configure, and I’m sure there are a few other workable servers I passed over.
Since I’m a Thunderbird/IceDove user, Lightning is the obvious choice for a desktop client. We also use Thunderbird at the office and in my family. Lightning also supports Google Calendar, so just like with degooglifying email, you can switch your frontend and backend in separate steps.
The Evolution calendar is pretty awkward. I tried it when I was using SyncML, but it didn’t last long. There are other options too.
Maemo: The reason I spent so much time on SyncML was that there was no CalDAV client for Maemo, but now SyncEvolution supports CalDAV/CardDAV sync!
aCal is an Android CalDAV client, and a replacement for the proprietary Google Calendar application. It works well, but the UI feels awkward and non-native. [There’s also CalDAV-Sync, which I’d skipped over because it’s proprietary, but maiki pointed out that the developer at least intends to open source it eventually. I’m not sure if the Android Calendar is free software or one of the proprietary “Google experience” apps.] Both sync to local storage for offline support.
It took me a long time to figure this out, especially since I was focused on SyncML at first, but I’ve finally fully replaced Google Calendar with CalDAV solutions. SOGo, ownCloud and Radicale are all great CalDAV servers. SOGo and ownCloud have built-in web clients, but there’s also CalDavZap as a standalone web client. Lightning is the obvious cross-platform desktop CalDAV client of choice, and SyncEvolution and aCal provide mobile clients for Maemo and Android.
The good news is there are plenty of options. As a bonus, most of these come with CardDAV support (which will be the focus of a future post), and ownCloud handles photos, music, and other files as well, so you may get more than just a calendar. Or, if it’s just a calendar you want, light-weight solutions like Radicale and CalDavZap give you just that.
I’m just thrilled to have finally figured this out.
When I moved to Maemo in 2010, I was using Google Calendar. I set up a sync via Exchange and eventually Erminig, which allowed me to sync my wife’s Google calendar too. But, when I started degooglifying and moving to free network services, I left Google Calendar for Funambol, using SyncEvolution as a Maemo SyncML client.
This was far from ideal: we lost shared calendars, there was no web UI, and desktop SyncML options were lacking. I quickly realized that CalDAV would be the better long-term option. I chose SOGo as my CalDAV server, but I couldn’t find a CalDAV client for the N900. (I tried the Funambol SOGo Connector, but just couldn’t figure it out.)
I’d just about given up on a comprehensive sync solution in Maemo… until I hit the jackpot a few days ago and stumbled upon a post by Thomas Tanghus on a CalDAV/CardDAV sync from the N900 to ownCloud using SyncEvolution.
It looks like SyncEvolution gained CalDAV/CardDAV support in version 1.2 — the N900 has a CalDAV client!
Thomas’ instructions were for ownCloud, but they work for any CalDAV/CardDAV server. I only ran into two issues, I think because I’d been using a pre-1.2 SyncEvolution. The steps included here are 90% from Thomas, with those two additions.
First, I ran into the same problem as Wolfgang: the SyncEvolution WebDAV template wasn’t there when I tried to run Thomas’ first step. Wolfgang’s solution worked for me as well: just uninstall and reinstall SyncEvolution.
# apt-get remove syncevolution syncevolution-frontend
# apt-get install syncevolution syncevolution-frontend
I suspect you’ll need to do this if you initially installed SyncEvolution before it included WebDAV support.
After reinstalling, I was successfully able to follow Thomas’ instructions (ignore the “backend failed” notices in the first command):
syncevolution --configure --template webdav username=YOURUSERNAME password=YOURPASSWORD target-config@sogo
syncevolution --configure database=CALDAVURL backend=caldav target-config@sogo calendar
syncevolution --configure database=CARDDAVURL backend=carddav target-config@sogo contacts
The CalDAV URL for your default SOGo calendar is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Calendar/personal and the CardDAV URL for your default SOGo addressbook is http://YOURSOGOINSTALL/dav/YOURUSERNAME/Contacts/personal. You can right-click on any additional calendar in SOGo and select Properties > Links to find the CalDAV link for that particular calendar.
I ran into another issue with the next step in Thomas’ instructions. The above commands created new configuration files in /home/user/.config/syncevolution/sogo/, but the following commands operate on /home/user/.config/syncevolution/default/, in which I already had existing, older SyncEvolution configuration files. SyncEvolution complained about my pre-existing configuration, probably because I’d installed a much earlier version of SyncEvolution, and it said that I’d need to “migrate” with the following command:
syncevolution --migrate '@default'
Again, I suspect you’ll need to run this if you’d installed SyncEvolution pre-1.2. After this, I was able to continue with Thomas’ instructions.
In the following command, the username/password should stay blank:
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogo username= password= sogo
Then, configure the databases, backend and sync mode for calendar and contacts:
syncevolution --configure sync=two-way backend=calendar database=N900 sogo calendar
syncevolution --configure sync=two-way backend=contacts database=file:///home/user/.osso-abook/db sogo contacts
I’m running SSL on my server, so I had to add this step to get past an SSL error:
syncevolution --configure SSLVerifyServer=0 target-config@sogo
(I bet there’s a way to configure it to properly verify the SSL certificate… but I’ll save that for another day.)
To test the configuration:
syncevolution --print-items target-config@sogo calendar
syncevolution --print-items target-config@sogo contacts
If that shows the data you expect to be there, then go ahead and run your first sync.
SyncEvolution has several sync modes. The above commands configured the default mode to be ‘two-way’, but if you have initial data on both your client and server, you’ll want to run a ‘slow’ sync first.
syncevolution --sync slow sogo
My initial slow sync took almost an hour for ~2540 calendar events and ~160 contacts.
(If you want to overwrite your client with data from the server, or vice versa, look up ‘refresh-from-client’ or ‘refresh-from-server’ instead of ‘slow’.)
After that initial sync, you can run a normal sync at any time:
syncevolution sogo
While the command line is great for configuration and testing, you don’t want to open a terminal every time you want to sync your calendar. You could schedule the sync command via fcrontab, but the Maemo syncevolution-frontend GUI has a daily scheduler.
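For the fcrontab route, an entry along these lines would do it (the schedule here is just an example; adjust to taste):

```shell
# fcrontab entry: run the configured SyncEvolution sync once a day at 06:30
30 6 * * * syncevolution sogo
```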
First, in the Calendar application, under Settings > Calendars, I created one for my wife’s calendar called “Heather” and one for my son’s calendar called “Noah.”
You can view all the available databases with the following command:
syncevolution --print-databases
You should see your new calendar listed here. It can be used by name, so long as that name is unique (and there aren’t any special characters to escape).
Then, adapting the above commands:
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogoheather
syncevolution --configure database=HEATHERCALDAVURL backend=caldav target-config@sogoheather calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogoheather username= password= heather@heather
# A one-way sync is fine here, because I just want to view my wife's calendar
syncevolution --configure sync=one-way-from-remote backend=calendar database=Heather heather@heather calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogoheather
syncevolution --print-items target-config@sogoheather calendar
# no need for a first slow sync with one-way mode set
syncevolution --configure --template webdav username=MYUSERNAME password=MYPASSWORD target-config@sogonoah
syncevolution --configure database=NOAHCALDAVURL backend=caldav target-config@sogonoah calendar
syncevolution --configure --template SyncEvolution_Client sync=none syncURL=local://@sogonoah username= password= noah@noah
syncevolution --configure sync=two-way backend=calendar database=Noah noah@noah calendar
syncevolution --configure SSLVerifyServer=0 target-config@sogonoah
syncevolution --print-items target-config@sogonoah calendar
# refresh-from-remote is faster than slow, and I know the local calendar is empty
syncevolution --sync refresh-from-remote noah
YMMV and you may want a different configuration for your additional calendars, but this should give you some examples of how to configure them. The key difference in these commands, besides the straight replacements, is to add a unique source name to all the --configure commands from SyncEvolution_Client on (except the SSL fix for the target-config), so that the client config ends up distinct from your primary calendar above.
Lastly, using the syncevolution-frontend, I scheduled daily automatic syncs for these two calendars as well, at different times.
I’m not sure if there’s a more elegant/concise configuration. I’m curious if there’s some way to combine the ‘target-config’ and ‘sogo’ steps… but Thomas spent over 12 hours on this and it works, so I’m not going to mess with it. I’m just thrilled that I’ve got this up and running.
After more than a decade in proprietary software slavery, and nearly two years of wandering in the calendar/contacts desert, I’ve finally reached the promised land of seamless and libre mobile, web and desktop calendar/contact sync. [Edit: Almost: The Maemo calendar application is proprietary…] Thank you, Thomas!
I’ve always had mixed feelings about Canonical, the company behind Ubuntu GNU/Linux. While they’ve made great contributions to free software, they’ve also been very inconsistent in their commitment to software freedom. Mark Shuttleworth’s response to the privacy concerns in Ubuntu 12.10 has fundamentally shattered my trust.
From restricted drivers to Launchpad to non-free documentation licences, there have always been concerns about Canonical’s commitment to free software. By 2010, the issues were becoming more serious. Ubuntu used to clearly warn users about restricted drivers, but in the Ubuntu Software Center, proprietary software is no longer merely tolerated; it’s celebrated and actively promoted. The average user doesn’t interact with Launchpad, but with Ubuntu One, Canonical’s proprietary service, users must delete, disable or ignore all of the places where it’s built into the Ubuntu experience. These concerns were starting to affect my everyday use.
But, I didn’t leave. I uninstalled the Ubuntu One packages, and ignored the Software Centre. Though, I did start exploring my options, with a Debian dual-boot and Trisquel in a virtual machine. However, there are many things that I do like about Ubuntu. My Ubuntu install is still 99% free software. Despite the controversy over the design process and community engagement, there are many things I like about Unity — the current obsession of Canonical’s founder, Mark Shuttleworth. I appreciate the outcome of his previous obsession as well — Ubuntu’s release cycle works really well. And, maybe there’s some sentiment — I’ve been running the same Ubuntu GNU/Linux install, across three different computers, since I first left Windows in 2007.
In 2010, my relationship with Ubuntu became uneasy, but it didn’t end. I’m not sure I can say the same for 2013.
The EFF, RMS and this tongue-in-cheek bug report provide a decent summary of the issue: Ubuntu 12.10 raises serious privacy concerns by reporting searches in the Unity Dash — which have traditionally been local searches — to Amazon, relayed through Canonical.
That Ubuntu screwed up is obvious — at the very least, by enabling this by default. But it’s more than the mistake; it’s the response. In defending the decision, Mark Shuttleworth writes:
We are not telling Amazon what you are searching for. Your anonymity is preserved because we handle the query on your behalf. Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.
This doesn’t build my trust; this shatters it. I did not switch to a free software operating system to have the overlords flaunt their control over my computer. Canonical has done many annoying and proprietary things in the past, but “Erm, we have root” is antithetical to the very notion of software freedom. Ubuntu does not have root access on my machine, nor does Canonical have access to my data. Yes, I must trust the Ubuntu project every time I run updates on my system, but this is a relationship and responsibility to be handled delicately, transparently, respectfully — not a position of power to be flaunted. I trust Ubuntu to maintain the software on my computer. That I trust Ubuntu to provide my system with security updates and bug fixes does not in any way give them licence to do other things, like relay my Dash searches to a third-party through a proprietary network service.
To make matters worse, Mark Shuttleworth recently referred to those “who rant about proprietary software” as “insecure McCarthyists.” In response to a question about “decisions that have been less than popular with the Free-software only crowd,” Shuttleworth writes:
Well, I feel the same way about this as I do about McCarthyism. The people who rant about proprietary software are basically insecure about their own beliefs, and it’s that fear that makes them so nastily critical. […]
If you think you’ll convince people to see things your way by ranting and being a dick, well, then you have much more to learn than I can possibly be bothered to spend time teaching.
Aside from the pot-kettle-black nature of his tone, this does not build my trust in Canonical.
These responses strike at the very heart of my decision to use GNU/Linux — software freedom. Canonical has never consistently cared about software freedom, but their offences and missteps have come closer and closer to my everyday computing. Now, a serious violation of privacy is brushed aside dismissively because I should just trust Ubuntu and Canonical because “erm, we have root,” and to raise concerns about proprietary software is akin to “McCarthyism.”
No, Mr. Shuttleworth, you don’t have root. The fact that you think you do makes me want to move far away from Ubuntu.
I would rather not leave Ubuntu. I don’t take the decision lightly. But developments over the past few years have made me very uneasy, and Shuttleworth’s attitude has shattered any trust I ever had in Canonical. Even if Ubuntu fixes this particular problem, I’m not sure what can be done to rebuild trust.
At the very least, I’m preparing an exit strategy.
I’ve been patient through many Canonical missteps, and I’ve defended the Ubuntu project over the years. But the “erm, we have root” response shatters my trust in any Shuttleworth-run endeavour. It’s antithetical to the reason I switched to GNU/Linux — software freedom — and I’ll switch again if that’s what it takes.
Of all Google services, you’d think the hardest to replace would be search. Yet, although search is critical for navigating the web, the switching costs are low — no data portability issues, easy to use more than one search engine, etc. Unfortunately, there isn’t a straightforward libre web search solution ready yet, but switching away from Google to something that’s at least more privacy-friendly is easy to do now.
In one sense, degooglifying search is easy: use DuckDuckGo. DuckDuckGo has a strong no-tracking approach to privacy. The !bang syntax is awesome (hello !wikipedia), the search results are decent (though I still often !g for more technical, targeted or convoluted searches), it doesn’t have any search-plus-your-world nonsense or whatever walled garden stuff Google has been experimenting with lately, and it’s pretty solid on the privacy side. After just a few days, DuckDuckGo replaced Google as my default search engine, and my wife has since switched over as well.
The switch from Google Search to DuckDuckGo is incredibly easy and well worth it. If you’re still using Google Search, give DuckDuckGo a try — you’ve got nothing to lose.
But… DuckDuckGo isn’t a final destination. Remember: the point of this exercise isn’t for me to “leave Google,” but to leave Google’s proprietary, centralized, walled gardens for free and autonomous alternatives. DuckDuckGo is a step towards autonomy, as web search sans tracking, but it is still centralized and proprietary.
A libre search solution calls for a much bigger change — from proprietary to free, from centralized to distributed, from a giant database to a peer-to-peer network — not just a change in search engines, but a revolution in web search.
Last summer, I ran a search engine out of my living room for a few months: YaCy — a cross-platform, free software, decentralized, peer-to-peer search engine. Rather than relying on a single centralized search provider, YaCy users can install the software on their own computers and connect to a network of other YaCy users to perform web searches. It’s a libre, non-tracking, censorship-resistant web search network. The problem was that it wasn’t stable or mature enough last summer to power my daily web searches. I intend to install it again soon, because as a peer-to-peer effort it needs users and usage in order to improve, but an intermediate step like DuckDuckGo is necessary in the meantime.
Although YaCy is designed to be installed on your own computer, there is a public web search portal available as a demo.
Seeks is another interesting project that takes a different approach to web search freedom. Seeks is “an open, decentralized platform for collaborative search, filtering and content curation.” As far as I understand, Seeks doesn’t replace existing search engines, but it adds a distributed network layer on top of them, giving users more control over search queries and results. That is, Seeks is a P2P collaborative filter for web search rather than a P2P indexer like YaCy. Rather than replacing web indexing, Seeks is focused on the privacy, control, and trust surrounding search queries and results, even if it sits on top of proprietary search engines.
Seeks also has a public web search portal (and DuckDuckGo supports !seeks). As you can tell, its results are much better than YaCy’s, but Seeks is tackling a smaller problem and still relying on existing search engines to index the web.
DuckDuckGo, though proprietary and centralized, provides some major privacy advantages over Google and is ready to be used today — especially with Google just a !g away.
But web search freedom requires a revolution like that envisioned by YaCy or Seeks. Seeks seems like more of a practical, incremental and realistic solution, but it still depends on proprietary search. YaCy is more of a complete solution, but it’s not clear whether its vision is technically feasible.
I intend to experiment with both of these projects — p2p services need users to improve — and continue to watch this space for new developments.
Next to email, replacing Google Reader as my feed reader was relatively easy, though I’ve chosen to use the move as an opportunity to clean out my feed subscriptions, rather than doing a straight export/import. I’ve replaced Google Reader with two free software feed readers: Liferea (desktop) and Tiny Tiny RSS (web).
A reading list can be very personal, and it can also be very misleading out of context. For example, my reading list suggests all sorts of things about my religious and political views, about the communities to which I may be connected, etc. Though, it would take some analysis to try and figure out why I subscribe to any particular feed. Is the author’s view one I espouse and whole-heartedly hold as my own? One I find interesting, challenging, or thought-provoking? Or one I utterly disagree with yet want to learn more about?
There is something private about a complete reading list, much like the books you might check out from the library or the videos you might rent from a store. As we get more of this content through the internet, it’s easy for these lists (and even more behavioural data about how we interact with them) to be compiled in large, centralized, proprietary databases, alongside all sorts of other personal information that would not be available to a traditional Blockbuster or public library. This is another revealing personal dataset that I can claim more control over by exercising software freedom, rather than dumping it into a big centralized, proprietary database. Both software freedom and privacy issues are at play here.
Liferea is a desktop feed reader for GNU/Linux. Google Reader was my first feed reader, so a desktop feed reader was a bit of an adjustment, but there are a few things I really like about it.
I tested Liferea as a Google Reader front end, then migrated subscriptions group by group (giving me a chance to re-organize, though I could have just used an OPML export/import), and once I upgrade to Liferea 1.8, I’ll connect it to tt-rss.
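For anyone doing the straight export/import instead, OPML is just a small XML file of feed outlines, so folder structure survives the move between readers. A minimal sketch (the feed name and URLs are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My subscriptions</title>
  </head>
  <body>
    <!-- Feed groups become nested outlines -->
    <outline title="Free Software" text="Free Software">
      <outline type="rss" text="Example Blog"
               xmlUrl="http://example.com/feed.xml"
               htmlUrl="http://example.com/"/>
    </outline>
  </body>
</opml>
```

Google Reader, Liferea, and tt-rss can all export and import files in this shape.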
Other Desktop Clients: RSSOwl is a free software, cross-platform (Windows, Mac OS X, GNU/Linux) feed reader, which also has Google Reader integration. I have only tried this briefly, so that I could recommend it to Windows users.
Tiny Tiny RSS is a web-based feed reader, similar to Google Reader, but free software that you can run on your own web server. There are some feeds I read all the time, and others I’ll skim or catch up on when I have a chance. For the must-read feeds, it makes a huge difference to be able to read them from my mobile computer. With Google Reader, I used grr, and there is a mobile web interface. I migrated my must-read feeds to tt-rss instead of Liferea so that I’d have easy access to them while away from my laptop, while still having the ability to use Liferea when on my laptop with its tt-rss integration. I’m moving more and more feeds into tt-rss, though I plan to leave some less frequently updated, less important feeds or feeds that are difficult to read from my mobile in Liferea only.
Some cool features:
As with Liferea, tt-rss gives me control over how frequently updates run, since I schedule the update job. But that control also comes without the downside of missing content if I’m away from my feed reader for a while; unlike a desktop client that needs to be open to retrieve new content, tt-rss does so in the background from the server, so it can still track new entries while I’m away. It has the benefits of Google Reader’s persistent background updates, while still giving me control over frequency and scheduling. I have the update job set to run a few specific times through the day, and tt-rss gives you the option to set an even longer update interval for any given feed.
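The scheduled update described above comes down to a crontab entry that runs tt-rss’s update script. A minimal sketch, assuming tt-rss is installed at /var/www/tt-rss (the path and times are illustrative; adjust to your install):

```shell
# Refresh tt-rss feeds at 07:00, 12:00, and 18:00 each day.
# update.php --feeds fetches new entries; --quiet suppresses output.
0 7,12,18 * * * php /var/www/tt-rss/update.php --feeds --quiet
```

Per-feed update intervals set in the tt-rss preferences then apply on top of this schedule, so a feed can be polled less often than the cron job runs.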
While I initially migrated from Google Reader to Liferea, Tiny Tiny RSS is quickly becoming my primary feed reader, and Liferea will become my primary desktop client for tt-rss and the home for less frequent/important/non-full-text feeds.
Other Web Clients: NewsBlur is another web-based, free software feed reader, which is based on a more modern web stack and seems to have some neat features. I have yet to try it, and I’m not sure of the state of its mobile or API/desktop integration, which are two things I really like in tt-rss. It’s worth taking a look at though for sure. NewsBlur.com has a hosted service, if you aren’t able to run your own web server or don’t have a friend who’s running one.
My migration away from Google Reader is essentially complete. I have fewer than a dozen feeds remaining there, mostly old or broken ones. I no longer log into Google Reader to read anything, though I’ve got one more round of cleaning to do to empty my account. I’m currently split between Liferea and tt-rss, but with Liferea 1.8, I’ll be able to integrate the two. I also have other libre options to explore with NewsBlur and RSSOwl.
There is nothing that I miss about Google Reader, and if anything, with an embedded browser, native desktop options, integrated comments, control over update scheduling, feed filters, and authentication support for protected feeds, I have a lot of useful features now that I didn’t have with Google’s proprietary service — never mind more software freedom and less surveillance.