Be just as suspicious of your news providers as you are of your software providers

As a human who occasionally gets a giggle out of some news articles, I sometimes riff on Phoronix on my G+ feed for its ‘sensationalist’ journalism. While the site can occasionally get excited over mundane developments, one important detail is that Phoronix doesn’t intentionally misrepresent issues. I may riff on Phoronix, but I genuinely trust their news as a baseline for reliable information, and if Larabel notices an inconsistency he’s quick to update his articles.

I can’t say the same for Rick Falkvinge and his “Privacy News Online” website.

I got linked to one of its articles from OSNews and was absolutely disgusted by the amount of distortion in it, which goes beyond sensationalism and straight into damaging, slanderous territory. I may titter at Phoronix once in a blue moon, but “PNO” actively made me sick.

So, what’s the article about?

Google provides a service binary for its “OK Google” voice search functionality, which Chrome downloads as a post-installation package. Open-source Chromium builds download the module in the same manner. The voice search module listens for a key phrase and transmits voice snippets to Google for interpretation, ultimately so the user has a reliable voice-search mechanism. Despite being downloaded, voice search is not activated by default.

The Pirate Party founder behind the article took offence that an open-source Chromium browser will download the binary blob which provides the service, and, I imagine mouth still frothing, decided the only way to solve this problem was to slam the Googley browsers through a litany of litigation-worthy libel.

Paraphrased, or rewritten? Rewritten.

The most egregious part of the article is a portion which “obviously paraphrases” a Google rep, and it offends me as a thinking person. The paraphrased content makes Google out to be a villain ready to tie people to train tracks, wilfully rewriting statements from a bug report to make them sound as draconian as possible.

The original content was only a copy-and-paste away from landing in the article as-is, and when you read the original text it’s all quite reasonable. My own paraphrasing of it goes:

There’s a binary voice-search module which will be downloaded, but it’s not enabled by default, and you can specifically tell Chromium not to download it. We think voice is pretty important, so we give it to you by default and treat it as part of the core browser, but you need to enable it yourself for privacy reasons.

But the “paraphrased” cartoon-villain version of the same text from “Privacy News Online” would make the NSA blush. Below are snippets copied and pasted from the article, cropped for brevity. I did add exclamation marks, because I feel they’re more appropriate punctuation for the ridiculousness:

Yes! We’re downloading and installing a wiretapping black-box to your computer! We did take advantage of our position as trusted upstream to stealth-insert code into open-source software that installed this black box onto millions of computers! Yes, Chromium is bypassing the entire source code auditing process by downloading a pre-built black box onto people’s computers. But that’s not something we care about, really! Yes, we deliberately hid this listening module from the users! We don’t want to show all modules that we install ourselves!
MUAHAHAHA!

(I also added the “Muahahaha”, sorry. It’s too ridiculous)

We must defend ourselves against features!

The article goes on to say that, because companies force these terrible optional binary features on us, we need to start getting all kinds of tin-foil-hat crazy with our electronics.

Fun fact: I used to work in a call-centre troubleshooting mobile phones. My favourite call ever was from an individual who wrapped his battery in tin-foil so “the government couldn’t listen”.

His first point is that people will “downplay the alarm”. Oh, you bet. He questions how it knows “OK Google” was spoken, implying that everything you say is always transmitted. There are two problems with this point: he skipped the part where you need to enable it, and simple math dictates that even the great Google can’t beam millions of simultaneous voice-streams to their servers perpetually.

His next point is that it is a big deal for the same reasons as point #1; that Google is slurping up physically impossible amounts of bandwidth listening to millions of people across the world. He adds that maybe there are keywords embedded in the software which Google is listening for, so every time you mention artichoke, broccoli, cauliflower, or dates, Google will secretly log your love of vegetables or of hot singles looking for a night out. One of the two.

Then he questions why it’s “opt-out”. Protip: when something isn’t enabled by default, it’s opt-in. But I get it! He wants all binary things in Chromium to be opt-in, not wanting binary components near his open-source sauce; but that’s a build issue, and if someone is building Chromium, that person is in an entirely different league from someone who just wants their browser to work.

Lastly, he states the inverse of the argument I just covered; he admits it’s opt-in except for the binary component being present, but then implies “we don’t really know that for sure! It could still be running! Google could be downloading different spyware!”. This argument annoys me because that’s not how computers work; you can have the most malicious executable on your hard drive, but it’s inert until you run it. I could have “babyeater.o” sitting on my computer right now, but until I choose to run it, it’s nothing. His entire argument here hinges on the idea that “Google put a binary service onto my computer, and they could secretly run it!”; but they don’t. Google isn’t stupid. If they tried that, Google would stand to lose billions of dollars in an international class-action lawsuit. If they say it’s “opt-in” it’s going to be opt-in, and just because it isn’t obvious doesn’t mean it’s hidden. Chrome and Chromium have a multitude of features, and for obvious reasons Google isn’t going to add a 12-part setup wizard to Chromium so every user can make decisions about highly technical aspects of a web browser.

Finally, the cherry on top is the article advocating that all computer peripherals should have physical on/off switches. But! Companies are EVIL! DANGEROUS! WILLING TO DO THINGS WHICH WOULD GET THEM SUED! What if these evil companies put out webcams and microphones which simply had dummy on/off switches? Clearly, hardware manufacturers are above snooping. At his level of paranoia, there’s a much easier solution than making the hardware industry include physical switches for everything: unplug the damn devices. I mean, it’s common knowledge that many computer systems are vulnerable to remote tapping – and those taps don’t even turn on the “recording” LED on webcams. If you’re going to be paranoid, at least be paranoid *all the way*.

Should you don the tinfoil headgear?

I advocate for crazy people. Crazy people let us know we’re all still sane, and sometimes crazy people find out crazy things or point out issues which should have been crazy obvious. People like Richard Stallman, who are clearly insane, are necessary because they pull the whole baseline in a focused direction. They’ll more readily call out things which are on the verge of becoming dangerous. I enjoy people who are constructively crazy. Richard Stallman brought us wonderfully idealistic open-source licences, putting his brand of crazy on the “awesome” end of the spectrum.

But then you get people like Rick Falkvinge. Rick is being crazy too, but he’s not being constructive. I don’t like Rick. His article could have been incredibly informative; he could have taught us how Chromium works, what it’s doing, why it’s doing it, and how to make an informed choice.

Freedom is fake if your choices are based on lies. Choices aren’t real when you’re not informed. Decisions aren’t your own when someone scares you into them. It’s manipulation.

When I read an article like “Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth” I get angry because of how it portrays the issue; it does a disservice to his readers, because they will not be making an informed choice. The article manipulates its readers into thinking voice search is an evil scheme by a faceless behemoth.

I like Google; while I’m cautious about my Google intake, they still provide high-quality services and set a reasonable expectation about how they use my data. What if a handicapped user read his article? Or a friend of a handicapped person? What if that person, who could have benefited from voice search, thought it was malicious spyware instead of knowing what it was really all about? I may never use voice search, but I think it’s a very reasonable inclusion, provided in a way that minimises hoops for interested users.

In the end, I guess this all goes to say that we also need to look into our news sources; Rick Falkvinge doesn’t seem to be making any effort to provide valuable information, instead preferring to blare klaxons at readers based on preconceived notions. So when you open up an article, keep in mind that authors can be biased just as much as software can be dangerous.

Now that you’ve finished my article on it, please, ponder what I’ve said and question what biases I have. Do some research on the topic – Google it. Come to a clear understanding, and make a real choice.

Fiber is DEAD! Long live Fiber!

So, over on G+ I had it pointed out to me that my multi-process work was… pointless. It really kind of killed Fiber as it was: apparently WebEngine already covers the bases I was aiming for. While it was super-sad to see my work circle down the toilet bowl (along with my ‘hacker cred’!) it was also a bit of a relief; not only do I get multi-process for free, it has been done better and more robustly than I could have managed.

The main thing this changed was that anything I do from this point forward could either be pulled into or applied to existing browsers, raising the question: should I fork, contribute, or still start from scratch? I knew I had 3 goals:

  • Deep KDE technology integration.
  • Present a polished, stable, modern experience.
  • Be simple by default, powerful when needed.

And a few ‘key’ features:

  • Be multi-process. CLEAR!
  • Extensions over hardcoded features
  • Custom per-tab profiles

I looked to see what was around, and there are 4 browsers I could work with:

QupZilla

QupZilla isn’t KDE-specific, and their thing is being cross-platform; I very specifically want to target a KDE-oriented browser. Additionally, I thought it was still on Qt4 while I want a Frameworks 5 oriented browser – though apparently they’re on Qt5 now!

Even with the Qt5 version, I’d still be taking an uncomfortable number of liberties with their goals. Also, my plan seems to be heading towards a QML-based browser – QupZilla is out.

Rekonq

Rekonq is currently *the* KDE browser: it has the fullest KDE integration and is well established.

The major downside is that Rekonq is an inactive project; it has been stagnant for over a year. Additionally, despite Rekonq being based on KDE tech, Firefox has supplanted it as the default browser on even the most vanilla KDE distributions.

I’ve been looking at the code and there are some worrying mistakes; I won’t pick it apart, but I spotted a few things that concerned me. Additionally, it’s still on Qt4 and needs to be ported to Frameworks 5.

Generally, I feel Rekonq has a large amount of baggage; it needs to be ported to Qt5, it has some outstanding issues, and it seems the developers have lost interest… Rekonq is out.

Qt Demo Browser

Qt Demo Browser is what Rekonq was based on long ago. Since it’s just a demo, basing a project on it would essentially mean starting a new one; but it is Qt5, and the most ‘vanilla’ of the bunch.

Qt Demo is, well, a demonstration. It has valuable up-to-date code, and is somewhat no-frills. But, because it’s just a demo it also lacks many subtle features that make modern browsing pleasant. It’s also built like a traditional application, so it has a structure that looks more like IE6 than a modern browser.

Qt Nano Browser

Qt Nano Browser is a QML-based browser. It has the absolute fewest features, and is no-nonsense in its quest to simply present a QML-based tabbed browser.

The downside is that the thing really is a skeleton, and there’s a huge amount of basic functionality missing. Compared to the other browsers, the question is whether I want to strip something down or build something up.

The Plan

The plan is to start a new browser from scratch (or rather, re-start), as I was going to previously. I’m currently debating between traditional widgets and a QML-driven UI.

I should go QtQuick, as it has some clear advantages and is another differentiating factor versus contemporary widget-based browsers. If I do so, I’ll instead base off QtNanoBrowser, or at least refer to the nano browser. About the only reason I’m still debating QML is because we had an agreement to disagree, but I should really just accept it and drink the juice.

So, Fiber as it was (my little multi-process experiment) is dead! But the time and effort that will be saved will go into creating something richer, different, and modern.

An Update… With Fiber!

Updates! Where I reveal things that are obvious, some that are lesser-known, and hopefully one that takes somebody by surprise… Before anyone asks why I’m not posting this on PlanetKDE: it’s because it’s non-news IMHO, or at least I don’t think it needs to be news. 😛

At the moment, I have 4 KDE-related projects on the go.

The first is my mockup assembly line and/or blog posts. I haven’t had any requests for mockups, and this is a blog post. CLEAR.

I think the next version of Plasma will have a new wallpaper by another VDG member, but I’ll double-check how that’s going. We still have the sunset wallpaper in the wings, and I can make a new wallpaper if I need to pick up any slack.

Right now the Chroma window decoration is on hold for a few reasons. Mainly I’m lazy, and I don’t have a good environment to test in without messing up my build setup. For the time being I’m pretty happy with the current Breeze windeco, and all I’m bringing to the party is some slightly fancier buttons. That being said, I’m still excited to bring it, but it has moved down a few pegs on the importance scale over the past few weeks.

If someone wants to take over the current todo (KCM integration) and make the decoration functional for the wider world, I can totally update the repo with my latest butcherings.

Next is the KDE.org websites, about which I’ve been quiet – very quiet. For those not in the loop (pretty much everyone outside of a few devs and the VDG), I’m cooking up what will hopefully be a successor to the current KDE.org sites.

Progress has been slow though. Mostly because I’m a web developer by day and it would destroy my brain if I developed websites around the clock. For the most part, KDE.org related stuff has been limited to Sunday projects because of this. Within the next week I’ll be finished with web development at my work and switching into IT mode for a bit, so I should have the drive to produce a demonstration site “soon”.

There is a (hidden) KDE forum with exactly 1 post in it, and soon I’ll request it be opened up, hopefully around the same time I get the demo site together. It will be for the effort of modernising the KDE websites.

Which brings me to my fourth project:

Fiber Browser

I’m not sure how comfortable I am coming out with this project as early as it is – so, my dear lovelies, please keep this on the “down low” (as your “peeps” might say).

Lately I haven’t been 100% happy with browsers for Linux, and I decided to start a new KDE Frameworks 5 based browser from the ground-up; I’ve taken to calling it Fiber. It’s been in planning for some time now, and I’ve landed enough code to build my confidence in the feasibility of this as a thing.

Between the KDE website refresh and Fiber, these two projects will likely account for the majority of my mid-term KDE-related contributions.

What is it?


Fiber Icon

Fiber is a Qt5-based browser using the new Qt WebEngine based on Blink. Currently, Fiber is only prototype-quality code, an icon/logo, and a slew of specifications I’ve been planning for a while now.

Like Google Chrome and Chromium, Fiber is a multi-process browser. Specifically, Fiber manages multi-process browsing by using QProcess, QWidget-based window containers, and D-Bus.

From a main window, Fiber launches “WebEngineWindow” processes via QProcess. After a short handshake over QProcess for the purpose of initial embedding, Fiber switches to D-Bus as its IPC to talk between the processes.

Fiber should be display-server agnostic and fully capable of running on Wayland; it does not use QX11EmbedContainer. Fiber should be portable between platforms, though for Windows I believe extra work is required for D-Bus. That being said, my development and testing will be limited to my current distro, and I’m probably going to be too lazy to package it most of the time.

In my day browsers had interfaces! We had Mosaic! I don’t get this “minimalism” jive!

Currently there’s no formal UI; at this point Fiber will start itself, start its first child at “about:blank”, embed the process, and send a request over D-Bus to hit google.com. Closing the main window neatly shuts down the WebEngineWindow process, and manually crashing the WebEngineWindow process leaves the main UI unscathed – process isolation, baby! I only need a clever “He’s dead, Jim!” graphic.
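
For the technically curious, here is a rough sketch of the kind of parent-side flow described above: launch a child process, embed its window, then drive it over D-Bus. Every binary, service, and interface name in it is invented for illustration – this is not Fiber’s actual code, just the general shape of the idea.

```cpp
// Hypothetical sketch only: launch a child "WebEngineWindow" process, embed
// its window, then talk to it over D-Bus. Binary, service, and interface
// names are invented for illustration and are not Fiber's actual API.
#include <QApplication>
#include <QDBusInterface>
#include <QProcess>
#include <QWidget>
#include <QWindow>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Launch the child process hosting the web view (assumed binary name).
    QProcess child;
    child.start("fiber-webenginewindow", {"about:blank"});
    child.waitForStarted();

    // Stand-in handshake: assume the child prints the native ID of its
    // window on stdout so the parent can embed it.
    child.waitForReadyRead();
    const WId childWinId = child.readLine().trimmed().toULongLong();

    // Embed the child's window into the parent UI. QWindow::fromWinId is
    // shown only because it's the simplest thing to demonstrate; it is
    // effectively X11-only, so a Wayland-capable browser would embed
    // differently.
    QWindow *foreign = QWindow::fromWinId(childWinId);
    QWidget *container = QWidget::createWindowContainer(foreign);
    container->resize(1024, 768);
    container->show();

    // From here on, IPC happens over D-Bus (the session bus by default).
    QDBusInterface page("org.example.fiber.WebEngineWindow", "/Page",
                        "org.example.fiber.Page");
    page.call("load", "https://google.com");

    return app.exec();
}
```

In the real thing the handshake, window management, and page control would obviously be proper classes rather than a main() dump, but the QProcess -> embed -> D-Bus progression is the part that matters.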

Currently the browser is using vanilla Qt, but once the underpinnings are solid I’ll be pulling in KDE Frameworks features. I don’t know if I’ll be using QML, but if I do it will likely be in the context of extensions and special pages. The main UI is being traditionally programmed for now.

Why not fork/contribute to existing browsers?

One of the main goals for Fiber is to be a fully multi-process affair, and this will be the justification for its existence. Currently none of the Qt-based browsers I am aware of are multi-process, and I’m not keen to layer such a significant transition onto projects not designed around it, either in a fork or as contributions.

Additionally, I’ll be sounding the klaxons at release, warning people that the first releases will probably be an unstable security hole held together with duct tape; I don’t want to inflict that on an established project.

What are the goals?

There are 3 major goals for Fiber (presented in order):

  • Deep KDE technology integration.
  • Present a polished, stable, modern experience.
  • Be simple by default, powerful when needed.

Integration with Frameworks and Plasma will be a key feature; my hope is to have Fiber promote the banner features Frameworks and Plasma provide. Additionally, I will attempt to have Fiber follow ‘KDE’ trends. While Fiber will have a ‘KDE first’ attitude, if functionality and polish for wider environments can be maintained, they will be.

A major goal is for this browser to have the same level of polish that the “big 4” browsers have. Between adding a feature and improving existing functionality, I will always vote to improve what we’ve got, though I’m not out to make a ‘lightweight’ browser. Essentially it will do what it does well, and it won’t compromise that to do more. I plan to place an emphasis on visual polish, but every effort will be made to ensure things like the rendering engine are not left behind.

The VDG has the mantra “simple by default, powerful when needed”. The plan for Fiber is to offer as much functionality as it can in the form of extensions, and to roll out simple, stable, and interchangeable components offering basic functionality while allowing power-users to push those aspects further and harder. Additionally, Fiber will use the concept of “tab profiles” as a method for managing features on a per-tab level; e.g. ‘private’ tabs would simply be a profile configured for privacy. I plan to include a developer profile. Eventually I’d be interested in users being able to specifically launch Tor/proxy tabs – but that’s far down the road. This means we can have advanced developer features that roll out en masse when requested, but stay out of the way for casual users unless called upon.
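
To make the tab-profile idea a little more concrete, here is a purely illustrative sketch of a profile as plain data; none of these fields or names come from Fiber itself, it’s just one way to express the notion that a ‘private’ tab is nothing more than a configuration.

```cpp
// Purely illustrative: what a per-tab profile could look like as plain data.
// None of these fields or names come from Fiber itself.
#include <QDebug>
#include <QString>
#include <QStringList>

struct TabProfile
{
    QString name;           // e.g. "Default", "Private", "Developer"
    bool persistCookies;    // a "private" profile would disable this
    bool persistHistory;    // likewise disabled for private browsing
    bool developerTools;    // enabled only in a developer-oriented profile
    QString proxy;          // empty for none; a Tor/proxy profile would set this
    QStringList extensions; // extensions enabled for tabs using this profile
};

int main()
{
    // A "private" tab is then nothing special: just a tab opened with a
    // profile configured for privacy.
    const TabProfile privateProfile{
        QStringLiteral("Private"),
        /*persistCookies*/ false,
        /*persistHistory*/ false,
        /*developerTools*/ false,
        /*proxy*/ QString(),
        /*extensions*/ {}
    };

    qDebug() << "Opening a tab with profile" << privateProfile.name
             << "- keeps cookies?" << privateProfile.persistCookies;
    return 0;
}
```

The appeal of modelling it this way is that a developer profile, a Tor profile, or anything else becomes configuration rather than a new code path.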

What Licence?

Not sure! Since everything is being written from scratch, I have liberties here. The likely answer will be GPLv2, and I figure if I start borrowing code that decision will be made for me.

Making Sense of the Kubuntu Council Leadership Spat

By now the news has spread quite quickly: the Ubuntu Community Council (or “CC” for short) attempted to boot Jonathan Riddell as a community leader, asking him to “take an extended break” from the Kubuntu Council (“KC” for short), citing personality conflicts and breaches of the Ubuntu code of conduct.

So, what just happened? On the various news sites and through some broken telephones there are several misconceptions about what happened. To an outsider the whole issue is rather complicated; going in, I knew nothing of the structure around Canonical, Ubuntu, and these councils, or how it all relates to Kubuntu.

This isn’t going to be a post about the he-said-she-said arguments; it’s more of an outsider’s explanation of how all this fits together and what it really means.

I’d like to mention I’ve received corrections in the comments, and would like to give a thank-you to the commenters for their feedback.

What is the Community Council? How does it work?

The Community Council is the highest governing body representing the Ubuntu umbrella of projects, including its derivatives. The CC is a democratic organisation with 7 seats available for elected representatives and an 8th tie-breaking seat reserved for Mark Shuttleworth. The group uses a well-defined electoral process which receives votes and nominations from the Ubuntu membership and community at large.

The group manages non-technical communication and governance of the Ubuntu project and derivatives. An important part of this event is the mandate that the council operate transparently to the wider community, the idea being that it also serves as a bridge between the commercial arm of Canonical and the open-source community at large.

What is the Kubuntu Council?

Just like a larger governing body, the Community Council has delegated sub-councils to represent larger projects within the community. The Kubuntu Council is one such branch managing the KDE-oriented Kubuntu project. Like the CC, the Kubuntu Council is composed of members elected by the community.

When the system works the idea is that the Kubuntu Council will take care of project-level matters independently, and the Kubuntu Council lead will attend meetings to trade information and matters upstream with the Community Council.

So… Does Canonical Own Kubuntu?

I will note here that Canonical is not one of the active parties in this dispute – this section is only meant to clarify misconceptions I’ve seen online, and to help explain the next sections.

Canonical owns the trademark for Kubuntu – so as a ‘brand’ they own Kubuntu. Beyond that, Canonical does not directly fund Kubuntu; instead they offer infrastructure in the form of repositories and servers, letting Kubuntu piggyback off the Canonical/Ubuntu project network and work more closely with upstream resources.

But Canonical does not employ the Kubuntu staff; they previously did, but Blue Systems stepped in when Canonical cut funding. Blue Systems has since become a much larger part of what drives Kubuntu than Canonical. Both of these together have made Kubuntu (as a project) much more than a solely Canonical venture.

In over-simplified terms Canonical owns the franchise and Blue Systems runs the hottest ‘non-headquarters’ location.

Who is Jonathan Riddell?

Jonathan is an ex-Canonical employee who was scooped up by Blue Systems after Canonical cut funding.

Part of Canonical cutting Kubuntu funding was terminating Jonathan as an employee of Canonical. He essentially retained his position in all community aspects of Ubuntu, just without the paycheque: he is a Kubuntu Council member, has access to the Canonical infrastructure, and helps manage the Kubuntu project.

Blue Systems picked him up, and he is able to work full-time in an almost identical capacity to the one he had as a Canonical employee.

What was the Ruckus?

Mainly, there are some conflicts between Riddell and members of the core Community Council. Riddell had repeatedly pushed several issues which the council was unable to fulfil, leading to frustration on both sides. In the end both sides showed the stress they were under, at which point the Community Council privately decided it would oust Jonathan from the Kubuntu Council.

The KC replied, arguing that the decision was not made transparently, questioning how much power the Community Council should have over the community-elected Kubuntu Council roster, and incensed that the CC would not retract the decision before a transparent conversation. The Kubuntu Council didn’t want to negotiate “with a gun to [their] heads”.

Who Ultimately Gives the Orders?

The Kubuntu Council is bound by its constitution to obey “legitimate orders” from the Community Council; if the CC makes a decision in line with the Code of Conduct and its own constitution, the Kubuntu Council must obey that request. But no provisions have been made for when the two groups disagree over a decision. The Community Council could be forced to cut Jonathan or his supporters off from Ubuntu infrastructure, such as Canonical repositories and funding, although the group has already stated that he is keeping his upload rights and ability to request funding. Given the hostilities, however, revoking those privileges would be a hardball solution, and one that the Kubuntu Council may not have control over.

The reason Kubuntu believes it can reject an authoritative attempt is threefold: such a removal had never happened before, so there was no precedent; there was no warning for Jonathan to correct the ‘behavioural issues’; and, the largest reason, the Kubuntu Council does not feel the decision was legitimate.

The entire issue hinges on the legitimacy of the order; the Kubuntu Council only has to obey legitimate orders, and it questions whether a decision made behind closed doors, when the mandate is transparency, can be considered legitimate.

In short: yes, the Community Council can remove people from its sub-councils, but doing so improperly might have terrible fallout. They can’t really tell the Kubuntu crew what to do if Kubuntu doesn’t find the orders legitimate. But if push comes to shove, it is possible for the Community Council and Canonical to revoke infrastructure access if a resolution cannot be found.

What Happens Now?

Right now the Community Council is exerting control over projects using their infrastructure much like a company would manage employees; if someone isn’t in line they can be moved, removed, or suspended without public debate.

The problem with this strategy is that communities don’t like being dictated to, and in attempting to do so the CC rubbed the community the wrong way. The Community Council literally gave an order and the Kubuntu Council said “no”. So what happens now?

Removing Jonathan from his position in the Kubuntu community also affects his value to Blue Systems. If he were removed, it would bring into question what Blue Systems and the community would do in response; Riddell is a Blue Systems employee and carries significant favour among KDE users.

The first thing that can happen is… Nothing. Birds will sing, grass will grow, and the KC will make the CC grit their teeth a bit. Maybe Jonathan will be removed after a more transparent meeting, maybe not. If the KC doesn’t remove Jonathan, then it may force Canonical into an awkward situation where it must back the council and start cutting off infrastructure.

Second, if this is resolved, Mark and the Community Council may revise their community strategy, put in safeguards for these situations, and possibly enforce a more formal structure over the ad-hoc sub-community model, preventing other projects from ending up in a similar position in the future. This would need to apply to all communities, as singling out specific projects would simply inflame the situation.

Third, instead of a split, the Kubuntu crew might attempt to separate their internal governance a bit, possibly designating a separate group to work with the Community Council while the main leadership remains as-is. This would let Ubuntu work with its partners without disturbing the leadership, but it complicates communication and doesn’t fix several underlying issues.

The next thing that may happen could be the start of a more gradual separation; Kubuntu as a project may slowly take on more infrastructure, growing apart and leaving the nest – maybe with Canonical’s blessing and the transfer of the Kubuntu trademark. Who knows.

Lastly, both sides could calmly file into a room before sizing up chairs to throw at each other, terrible words being said about people’s mothers before forking Kubuntu into ‘Librebuntu’. This would hurt, as the Kubuntu and KDE developers already have poor relations with Canonical, meaning a fork would likely lead to a mass exodus from Kubuntu to the new project (much like the LibreOffice fork). While the freedom of not having Canonical or the Community Council dictate policy would be refreshing, the loss of infrastructure would be a certain setback.

In the End… ?

In the end, I think we all simply hope that projects, companies, communities, and benevolent dictators can all work together in relative harmony. The situation isn’t ideal, but a major part of building strong communities is occasionally finding out something doesn’t work – and fixing it; hopefully to the benefit of everyone involved.

Right now both sides are holding strong in a ‘grey zone’ with their actions – the CC seems to be meting out harsh decisions without clear policy, and the KC is refusing to listen until the CC backpedals on its position.

That’s my breakdown of the politics; I hope it helped and provided insight into this whole messy affair. I hope it gets all sorted out in the long run. If I have anything wrong, please do let me know in the comments and I’ll make the relevant corrections.

Software vs. Philosophy: Raging against Microsoft as a Company is Backward

Today the FOSS world was shaken a bit by some of Microsoft’s announcements, mainly the announcement of a cross-platform version of Visual Studio with a native Linux build. While not strictly their original brand-name IDE, it’s still a big announcement for Microsoft to put one of their top brands, so ‘quintessentially Windows’, onto Linux… But the most interesting part, to me, was not the release itself; it was the 3 distinct groups of onlookers commenting on the news that the Redmond giant has quite boldly stepped far deeper into the open-source wilds than ever before.

The first group of people are the ones who have been supportive, praising our ‘enemy of old’ for moving away from lock-in and turning over a new leaf, especially since it conflicts so completely with how they have historically monetised their business. Previously, for Microsoft to win “everyone else had to lose”, but it has become apparent that this mindset is no longer in their DNA.

Then there’s the group of people who are looking at the software for what it is: a new development IDE which may be better or worse than contemporary Linux development applications. Some have noted it’s a fork of Atom, and while that disappoints those who wanted a pure-MS codebase seeing the light of day, it’s still interesting to see Microsoft release products in the true nature of open source, where we fork software to make the improvements we believe will serve it best.

But the group I’m most interested in addressing is the haters: the people who refer to Microsoft as “M$” and spit on any work the company produces; the people whose philosophical hate of yesteryear’s software giant continues unabated, their seething, vocal loathing denouncing its work as the next evil plot, or as substandard because of its ‘capitalist origins’.

I’ll admit I went through a ‘zealot’ phase when I got into Linux – because I was young and stupid and half a hipster. The first year I thought I was awesome for ‘being free’ and ‘sticking it to the evil companies’ like Microsoft. I refused to use non-free drivers, and thought I was liberating myself by jacking in my laptop because there wasn’t a free wireless driver. My setup was sub-optimal, and I was stupidly proud of my broken, barely-functional equipment.

Today I find the functionality and flexibility of Linux suits my personal development habits, I find the desktop pleasingly functional, and I use software that works for me – regardless of the source. I use Steam because it lets me be entertained without rebooting my computer, with AAA games such as Bioshock Infinite and Cities: Skylines running perfectly. I use the Xbox controller because extended play on any other input will hurt me. I appreciate that there are free alternatives which offer me a guarantee of ‘shenanigan-free’ computing, but where software is good I will use it, even if it’s closed. If Microsoft releases products on Linux I may use them if they have a place – even if those applications are not free software.

When it comes to hating Microsoft, to me, that idea no longer makes sense. I will freely say I do hate and loathe *parts* of the company, but to hate the whole umbrella regardless of the people involved is becoming backward. I love the teams who are saying “hey, let’s get into open source” while also raging against the legal arm attempting to leech from Android. It’s the same with Google; I love the parts of Google that sponsor open-source events while being wary of their disturbing advertising model.

You could argue that even if you only support the positive sections of a company the negatives benefit as well; that by supporting the Visual Studio team you’re potentially helping the slimy legal arm survive – but in reality if Microsoft sees support and benefits from better alternatives, they will shift their resources in that direction. A company that large requires time to turn the ship around, and there’s no real point in taking pot-shots at them when you can see their teams genuinely charting into such unfamiliar waters.

The fact is Microsoft isn’t a single hive-mind nest of businessmen looking to suck every dollar from the digital age. It’s thousands of upstanding people with real human problems who genuinely want to see the software they write improve the world. I don’t see cronies stepping onto public transit disturbing the bus driver because of their maniacal cackling – the world didn’t see an uptick in animal sacrifice and Hot Topic sales as Microsoft recruited its developers.

Am I going to use this new cross-platform Visual Studio? Probably not – I’m getting familiar with Qt Creator – but I will genuinely try it at some point. For whatever reason the Redmond camp has become friendlier with open source… be it that they’re no longer the 800-pound gorilla, that Gates and Ballmer are no longer at the wheel, or that open systems are dominating new markets; it doesn’t matter. The company is improving its philosophy, and I think we’ll be the foolish ones if we dismiss it. If you’re a hater, hop onto the bandwagon of people paying attention to what they do – they’re publishing software in open waters, and we’d be morons not to encourage, extend, and integrate.

Chroma Update

So, where’s Chroma, the experimental window decoration Breeze fork? Still not released yet.

The main hurdle is the fact that Chroma previously overwrote Breeze; once you installed the Chroma repo, Breeze would be kicked out like a bad roommate.

Not having both is obviously no good. If Chroma breaks and crashes KWin, KWin will restart and attempt to fall back to Breeze, but instead load Chroma again… and we get into a crash loop, requiring users to drop to a terminal and install an alternate DE or window manager. Blegh. Ugly.

(Not that I believe it would do that, but if I did it to one person I’d feel super bad)

The cheap and obvious solution would be to just open my project directory and do a find-and-replace swapping ‘Breeze’ for ‘Chroma’, and I’m sure that would instantly resolve all the issues – but mangling the code like that would completely undermine my ability to easily pull from and push back to the main codebase.


What I don’t want the Chroma codebase to be

Essentially, I want Chroma to read as Breeze in code, and I want both codebases to easily share between each other without the naming breaking things.

So, where are we?

Right now Chroma is installing, but there are some quantum fiddly-bits which get all timey-wimey; when you install Chroma you are presented with two Breeze decorations in the KCM. Because I’m still inexperienced with this stuff, I’m still in the process of tracking down where I must rename Breeze to Chroma to get it registering properly, but I’m taking my time because I don’t want to rename things needlessly.

So right now it should be done ‘any time’, once I realise what minor tweaks need to be made to get Chroma and Breeze co-existing nicely. I’ll also admit Chroma isn’t my primary focus at the moment, so more or less I’m just taking the odd hour when I need a breather to browse through the code and see what needs to be done.

Cropping workloads and deciding what’s important

The FOSS community is amazing, and as often as we may hear it has problems, there’s one serious issue I’m sure we all agree on: we will always need more contributors. Every project is starving for people – I couldn’t name a single project which isn’t on some level.

What we lack in fleshy human caffeine-to-output converters we make up for with passionate members, and the people who are part of projects are more often than not insanely dedicated heroes who churn out enough work to equal more than a few of their peers. A huge number of insanely important projects are headed up by single individuals.

In FOSS you get noticed very quickly when you contribute, even if it’s a small contribution to a high-profile project. Once you get noticed, other projects may ask for you, people who belong to multiple projects will ask to introduce you to other teams, and before long you realise you’ve gone from doing a couple of projects well to doing several projects poorly. This presents a whole new problem I have recently come to terms with: you can’t contribute to every project.

I got all sour-apples about it with myself, one of those “you idiot!” inner monologues. Last week I said ‘yes’ to another project, and today I sat down and realised I was wasting people’s time. The person who invited me was catching me up, people in the hangouts were being patient while I straightened out my facts, and I contributed nothing. Using my crystal ball labelled “common sense” I divined that I’d probably only get an hour or two a week to offer up. Not nearly enough for the scale of that project, at least when you must budget time like a precious commodity.

In my seat I wrote out a list of projects I have on the go, and realised the number I produced was “too many”. I slumped, because I wanted to contribute to them all and I had to do the worst thing ever: start looking at projects to step back from. It sucked.

The problem with being attached to a project which you’re not really contributing to is that it can be a severe detriment to the people who are actively contributing; they may ask you to take care of a task, and what should have been a 2-day knockout turns into a 2-week slog, causing delays and problems.

So, I’ve stepped back from a handful of projects I had joined up with. No fears for anyone wondering if “you’re next”, since I’ve already sent out messages to the projects I’m stepping back from. Right now, I want to keep my focus on at most 3 projects.

I’d rather do a few things well, than many things poorly. Hopefully, over the coming weeks, the projects I’m still involved with will see a stronger push from my end again, and adequate waves will be made.

Buzz Buzz!

With the Sprint behind us and the Freeze coming up next month, the VDG has made its agenda for the coming weeks, and I figured I’d share some highlights I’m working on, and a couple I’m personally looking forward to.


Wallpapers

Andreas is running a fantastic wallpaper contest which I hope many of you will participate in; the goal is to gather weather-based wallpapers, and also to get several new wallpapers into circulation. Your submission doesn’t have to be a weather wallpaper, though – if you have a wonderful high-resolution graphic, submit it!

In addition to pulling in community work, we’re going to change up the release cycle for new wallpapers: Previously, wallpapers were updated every second version, but now we will add a new wallpaper for every release of Plasma!


Avatars

The current crop of standard avatars has aged gracefully, but we’re looking to refresh them. We have a new crop of avatars based on history-changing individuals and fairy-tale children. Eventually we will expand the set to include a range of personalities.

Credit for the design goes to Jens Reuterberg, who created the fantastic VDG profile pictures.


Decorations

We may be looking forward to a new alternate decoration; originally based on Breeze and the result of a first-time hack gone too far, “Chroma” may be appearing! Somewhere! Where it shows up all depends on my laziness and incompetence. I assure you I am only mostly lazy and incompetent.

There are a few things that still need to be done with it, mostly in regard to properly breaking it out from Breeze to be its own decoration, and learning how to properly submit it.

Akademy

Several VDG members are looking to show up at Akademy, so if you’re looking to hear an awesome talk or two, there will be some design talks in store. I won’t go spoilertastic, but I’m looking forward to it, or will die trying to get there – you should be too!

Plasma Sprint 2015

Just over 2 weeks ago I stepped off a plane, putting my heels onto Canadian soil after spending a week participating in the Plasma 2015 Sprint. The entire experience was exhausting in the best of ways, and after landing home my throughput was thoroughly trounced for some time as I settled back into normalcy. But let’s rewind to the beginning.

On the day of my arrival in Barcelona, to say I was nervous would be an understatement – in the moments before pressing the buzzer I was in downright terror! These people will realise I’m an idiot! Ship me back to Canada on the next canoe! Needless to say, only minutes into the sprint not only were my worst fears proven completely unfounded, but I had met a group as welcoming as they were brilliant.

Finally, I think I have the perspective to share my experience. I won’t try to recap the entire event; I will mainly focus on the VDG work.

But first! The People of KDE

I met about a dozen dedicated and hard-working developers in the Blue Systems office during the sprint, and it needs to be said just how great these people are – each and every one passionate about their respective fields and projects. I’d really just like to give a shout-out to everyone I met in the Sprint. They’re the kind of people who make you smarter by proximity, and they welcome you to do it. For anyone invited to a Sprint I highly recommend jumping on the chance; you will be enriched for doing it.


Drawing Konquis

After I arrived mid-day, Jens Reuterberg pitched the idea of creating and stockpiling promotional graphics. Essentially we wanted vector artwork which could easily be used for things like release announcements, large print materials, web pages, etc. Jens dove head-first into logotypes, and I splintered off into doing up a pair of vector Katie and Konqui graphics during my half-day; Konqui being a direct trace, and Katie being new. You can view the original graphics by the talented Tyson Tan here.


Download Katie | Download Konqui

VDG <3 Developers

A great deal was discussed during a pair of review and planning sessions in the first two official Sprint days. One of the biggest things (for Jens and me) was helping the VDG and developers interoperate better; for those who don’t know, the VDG communicates very differently from mainline developers.

Devs tend to focus on bug reports, mailing lists, review boards, and IRC. Members of the VDG tend to use forums, Hangouts, and to a limited extent IRC. Immediately there’s very little overlap, which means at this point developers have to go to the forums to engage the VDG.

The problem lies in how forums operate; where the VDG design process benefits from the relative chaos, it’s not good for developers looking for the ‘final word’ of a design discussion. It’s further impacted by forum conversations which don’t reach definitive conclusions, or discussions which get muddled. When developers go to the forums they need a solid final product to build around – but on multiple occasions they end up with half a dozen different designs and no clear answer on what they should do.

It was a short discussion during the Sprint, but Jens and I both immediately agreed that this is an area where the VDG must step up and refine our process.

The current idea is to stick with the forum threads as the main creative area, but change the way they spin down. Once we feel a design discussion has gestated, the VDG aims to have a member pull the ‘final’ design from the conversation, at which point they’ll put together a coherent deliverable developers can understand and act on, on a channel they are comfortable with.

There are still details we are ferreting out before we more formally put this into motion, but the essential aim is to move the VDG into a position where we can reliably ship usable, deliverable designs on a channel developers can comfortably handle.

Breeze Applications

This only came up briefly during the Sprint as well, but it is something which has been brewing for a while now – so it might be worth mentioning ‘for realsies’, since I don’t think anyone has pointed out that this is a ‘thing’:

KDE and Plasma have a bit of a history with names, and for many core applications we’ve been wanting a more consistent scheme for it all. At the same time, with every major toolkit release (e.g. Qt4 -> Qt5) many applications need to be ported or re-written. Finally, around these major releases, visual and workflow trends have usually shifted, meaning the experience of applications will also shift.

With all this going on, we figured it was time to put a bow on it and turn this cavalcade of factors into one cohesive event, so we’ve come up with the concept of Breeze Applications.

The idea is that, coinciding with framework, trend, and design changes, we will name a subset of the bundled applications after the current design. So for Plasma 5 we will have ‘Breeze’ applications; for some future Plasma version many moons from now we may have ‘Gust’ or ‘Wind’ applications.

What does this mean? The biggest thing is that we intend to use these ‘Breeze’ applications as standard bearers, which we hope to see other applications follow. It’s much the same way Google treats ‘Holo’ and ‘Material’ along with their base applications: this is the design, these are the examples. Ideally we intend to focus on only a few applications, which developers will be able to dissect and say ‘oh, this is the plan’. In addition, as new technologies and techniques land, we hope Breeze applications will be the frontrunners in adopting cutting-edge KDE/Plasma technologies.

Does this mean every Plasma or KF5 app will be named “Breeze X”? No. We only plan on Breeze-ifying the simpler core applications which can be easily maintained, kept up to date, and streamlined enough that the code could easily be used as reference material.


Fun fact: The bathrooms in Frankford are powered by Ubuntu!

Dynamic Window Decorations

Before I even get started on this, I must give props to David Edmundson. The man is a trooper, and I feel almost as if I tortured the poor gentleman throughout the sprint.

During the sprint I presented some of my DWD plans; technical details were discussed, implementation questions were raised, and concerns were round-tabled. The discussion was extremely positive and productive, and real issues were ferreted out.


One of the larger questions was ‘what IPC protocol should be used?’. I was personally educated about the Wayland protocol, and how it could be used even on non-Wayland systems – since it is just a protocol and not an installed library. Ultimately, the developers present agreed that D-Bus was the way to go, the general consensus being that the protocol is known and familiar, mature, battle-tested, and isn’t going to shift or break.

I also gave my personal thoughts on how applications might access/implement DWDs, and while there’s still considerable room for discussion, it seems to be on the right track. I was cautioned by developers and I feel the need to point out: even when the DWD protocol does pick up steam it will still be years before it’s available in any meaningful way.

During the development portion of the Sprint I managed to rope David into doing some DWD work on a proof-of-concept level. Through his efforts we now have a much better idea of what obstacles we will face integrating widgets into server-side decorations, such as ensuring the draw code runs correctly/efficiently. He heroically managed to get window decorations to draw usable sliders, so we do know window decorations are capable of drawing server-side widgets.

Sadly, the proof did nearly cost David his sanity. It probably didn’t help that I was giggling like an imbecile. Sorry about that, David. I hope the tea made up for it. :/

UI Feedback

Throughout the Sprint, Jens and I were able to lend our services in helping to design and streamline interfaces. Towards the end of the Sprint we also did a walkthrough of the Plasma desktop and several components to identify surface-level bugs and weak areas.

This included an extensive review of the System Settings utility and its KCMs.

I also managed to chip in some light advice on a new power-manager tool and on an upcoming redesign of the Baloo settings manager with Vishesh Handa.

And a Great Deal more!

As I mentioned at the start of the post, and can only mention again: there were a lot of really great people at the Sprint – and all of them had their own projects, goals, plans, and feedback. It was really impressive to meet people who had such a deep understanding of KDE Frameworks and Plasma, able to talk about extremely complex technologies in detail over a coffee.

I, personally, learned a great deal from everyone. From being unable to compile a package to now comfortably hacking, simply rubbing shoulders with these outstanding individuals was absolutely my privilege.

There’s a great deal not in this post, but I imagine other posts will fill in the rest… So on a closing note I will say again; if you are ever invited to a Sprint, don’t hesitate to say yes – it’s an amazing experience which is beyond worth it!


I drank this. I still don’t know what it was.

How Bread is Helping Make Breeze Cursors Pixel Perfect

Some people accuse me of being a crazy person. Others are wrong. But occasionally the seeming madness of it all will bring about good things.

Last night was a sleepless night in all the good ways; I’m excited for the upcoming Plasma Sprint, and knowing I’ll be packing myself into a cigar tube and flinging myself across the North Atlantic Ocean is too exciting to sleep through. I had promised a commenter (too long ago) that I would make green cursors, so I decided to make good on my word. After that only took 5 minutes I needed more, and the wafting smell of my bread maker inspired me to make a Bread cursor theme. Once that was done, sufficiently delirious, I sent my weird bready message to the VDG. They appear to have ignored it – a wise decision. They’re busy people doing actual work.


Today I opened up the cursors to see what I had done. Nothing too terrible, and I decided it was worth polishing them up, if just for the larfs. One of the touches was to add a half-pixel white outline between the crust and loaf for contrast.

When I rendered the tweaked cursors, they started to look awful because of how SVG renderers layer and anti-alias nearby vectors. Simply put: at a vector edge, even if there’s another identical edge above it, both edges will bleed into their neighbouring pixels, as opposed to the upper vector shape ‘blocking’ the lower shape.


The “desired result” is the result a designer would expect, while the actual result is technically correct.

This had the effect of making the hand of my newly minted bread cursor (the one with the most edges) look “washed out”, because the two lighter inner layers were bleeding over the outline.


The solution to this problem is to ‘supersample’ the cursors in our build scripts. Supersampling is rendering the image at a much higher resolution and then scaling it down to the desired resolution. Instead of going directly from Inkscape to a final image, we first export each cursor to a temporary file at 4x the standard resolution (which is 2x the double resolution). We then scale that image down and copy it to the final resolutions.
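
As a rough illustration of that step (and only an illustration – the real cursor build scripts drive Inkscape and the cursor tooling from the shell, and the file names and sizes below are invented), the supersampling boils down to ‘render big, then smooth-scale down’:

```cpp
// Minimal sketch of the supersampling step, assuming Inkscape has already
// exported the cursor SVG to a PNG at 4x the target size. File names and
// sizes are invented; this is not the actual Breeze build pipeline.
#include <QGuiApplication>
#include <QImage>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv); // keeps Qt's image plugin machinery happy

    const int finalSize = 32;          // target cursor size in pixels
    const QImage big("cursor_4x.png"); // e.g. a 128x128 render from Inkscape

    // Smooth scaling averages the extra samples, so sub-pixel details such as
    // a half-pixel outline survive as soft but visible pixels instead of being
    // clobbered by neighbouring edges.
    const QImage down = big.scaled(finalSize, finalSize,
                                   Qt::KeepAspectRatio,
                                   Qt::SmoothTransformation);
    down.save("cursor_32.png");

    return 0;
}
```

Scaled up to the real build, the same thing simply happens for every cursor at every target size.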

The end result is that we can more easily use sub-pixel detailing in cursors without worrying about losing smoothness; the extra detail may not be noticeable on a day-to-day level – but it’s the polish Plasma users are beginning to expect. Additionally, high-resolution cursors will also benefit, because the half-pixel details become full-pixel details, and on high-quality screens you’ll have ultra-sharp graphics.

And that’s how bread is helping make Breeze cursors pixel perfect!

Now, super-pixel-perfection isn’t that noticeable so there’s not going to be a rush to update existing cursors; but if one day you quietly notice your cursor is a little bit sharper than it used to be – you can thank bread.

Download The Cursors:
(extract “compiled” cursors to your icons folder to install, or download the source to edit or remix them. Golgari is a green/black theme)

Bread Source
Bread Compiled
Breeze Golgari Source
Breeze Golgari Compiled