On Monday we hosted a tech talk on what’s coming, and why, in Meetup 2 for Android.
Leave a comment here if you have any questions. And if you’re interested in helping us bring people together on Android and other platforms, we’re hiring.
Here are those slides.
Design matters, on Android
Tonight we hosted an Android meetup at Meetup. I took this fairly bad photo of Jimena, Mike, and John, from the back of the room on a Nexus 5. For the conditions, I think the 5 did pretty well.
They talked about the progress we're making in rebuilding the mobile apps as actual social networking apps. Our apps were originally designed as calendars only; we bolted on social and group features one by one, muddling navigation on Android in the process. This project has been about producing a coherent navigation scheme that can better expose current features and make some room for personalized content to come.
If slides are posted I'll link to them, but you'll also be able to install the new app yourself in the next few weeks.
Unfiltered and Slick
People sometimes ask for example "CRUD" webapps for Unfiltered. I've never made one because a) it's boring and b) most people use Unfiltered to make APIs. But we had Jan Christopher Vogt for our September meetup and I was reminded how much I liked Slick (or rather ScalaQuery, as it was called when I last used it).
Making a Slick app would not be boring. To make it even more interesting, and less irrelevant to the future, I made it touch friendly. It looks like this:
It's a database for you to enter dog breeds and famous dogs of that breed. Super useful. If you want to update and delete, you have to write the code for that. Ruff ruff.
The unfiltered-slick.g8 template is pretty simple. I adopted the DB setup code (and ASCII art!) from Chris's ny-scala demo project.
If you want to try it out, install giter8 and run this:
g8 unfiltered/unfiltered-slick
My favorite thing in there is this little function that makes a directive for loading data into a Breed instance:
def breedOfId(id: Int)(implicit session: Session) =
  getOrElse(
    Breeds.ofId(id).firstOption,
    NotFound ~> ResponseString("Breed not found")
  )
This is built with Slick 2.0.0-M2. It uses H2 in-memory and I haven't tested it with anything else. If I didn't do something the slickest way, send a pull request.
Recentralizing the Internet
A few weeks ago I was minding my own business, flicking through twitter on my phone. Someone had linked to the appalling government surveillance story of the day and I was preparing to feel disgusted and helpless. Only this time, my mobile data provider took offense before I had a chance to.
Well, that's one way to keep the people in line! I let the linker know that the entire privacysos.org site was blocked by my service provider, but it turns out the blockage was a little more complicated than that. Most people using T-Mobile weren't having problems. And when I tried the same site using Chrome instead of Firefox, it loaded normally. What gives?
I figured that parental control settings were the differing factor between me and other customers, and indeed that seems to be the case. Though I never turned on a filter for my account, my phone company assumes I'm a "young adult" at that special age 17-18 when one must be sheltered from information about civil rights. Pre-paid customers are presumed to be juveniles, while grownups who pay more for traditional subscriptions get unimpeded net access (for now).
At that point I could have adjusted my account settings, but I still wanted to know why the site wasn't blocked in Chrome, and why an ACLU site was considered "offensive content" in the first place.
Digging into the Chrome question, I found one clue: the site was blocked in incognito mode, but not otherwise. A significant feature of incognito mode is that browser cookies from other sessions are not sent with requests — could it be that some cookie, like a login cookie for the T-Mobile account management site, caused the filterware to back off?
That shouldn't have been the case, since cookies are only sent for the domains they belong to. A T-Mobile login cookie shouldn't be sent with a request to privacysos.org, so how would the filter know to handle the request differently? Still, this was the best guess I had. I wanted to know exactly what Chrome was sending with those requests, and since they weren't passing through wifi or any other network under my control, I couldn't use Wireshark. Instead, I set up Chrome remote debugging.
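If you want to reproduce that setup, the workflow I used was the standard one documented for Chrome on Android at the time: enable USB debugging on the phone, forward the DevTools socket over adb, and browse to the forwarded port on the desktop. The forwarding command below is from Google's remote debugging docs, nothing specific to my phone:

adb forward tcp:9222 localabstract:chrome_devtools_remote

After that, http://localhost:9222 on the desktop lists every open mobile tab, with a full network inspector behind each one.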
And finally, in the network inspector, I spotted the gremlin: Google's Data Compression Proxy.
via:1.1 Chrome Compression Proxy, 1.1 Chrome Compression Proxy
Aha. T-Mobile had been cut out of the web traffic filtering business as a side effect of Google's own web traffic optimizing business. To test this theory, I looked for a switch to turn off the Google proxy. But surprisingly, it just wasn't there.
Google has been seeding this option into the Android Chrome application as a split test, and most likely I did agree to turn it on at some point; but in my case the settings toggle that should appear afterwards didn't. I spent some time clearing app caches, uninstalling, and reinstalling — nothing caused the option to appear. Eventually I installed Chrome Beta, where the proxy option does reliably appear under the oblique label "Reduce data usage". In addition to reducing data usage, I was able to confirm that it handily circumvents T-Mobile's primitive content filtering.
But don't break out the champagne just yet, 17-18 year olds! While I appreciate that Google's proxy is engineered to improve performance generally (like other proxies before it), it would be foolish to ignore that it is also a filter.
All I can really do here is change masters, from one single point of control to another. Indeed, Google's proxy is disabled when in incognito mode — how can a supposedly "secure" feature be unsuitable for private browsing?
Ultimately this isn't a choice between different levels of privacy, but a choice between different vectors of exposure.
A lot of people still trust Google, with some justification. But as with any transfer of power, we should consider its implications not just for the current regime but for the next one that will presume to inherit it, and the one after that. If Google is slightly more "evil" every year, how do we feel about Google having full knowledge and control over our web browsing in n years?
Google's proxy stands to control increasing portions of web traffic, eventually majorities. We can chuckle (and I do) at how it thwarts a crusty old phone company's content filter without even trying, but there will come a day when a carrier refuses to allow Chrome as a default browser on their crapware phones unless their own content filtering is integrated with Google's. And then what?
Having solved the mystery of Chrome, I went back to my phone company and asked why they were blocking an ACLU web site as "offensive." They of course asked me to email some blackhole instead of making my requests in broad daylight. So I did that.
To: [email protected] Subject: unblock request
Hi, I noticed that this site is blocked from "young adults" for its "offensive content": http://privacysos.org/
The site is published by the ACLU of Massachusetts and has information about privacy rights online. It does not have any offensive content that I have been able to discover. Could you correct this?
Nathan
No one replied to my email, and privacysos.org remains blocked to 17 year olds — or more specifically and ominously, it is not considered to be "content suitable for age 17 and up". As such it's likely blocked for far more people in the "and up" category: normal old people who haven't taken a deep dive into their account settings to assert their adulthood multiple times.
Having satisfied my curiosity I finally did turn off T-Mobile's sex/ACLU filter, but to do so I had to "prove" I'm at least 18 unwholesome years old by giving my name, address, and part of my social security number. So much for "you restrict access to adult web content on your family’s T-Mobile phones" — this step's only purpose is to prevent young account holders themselves from disabling the filter.
Like all censorship schemes T-Mobile's is ruled by prejudice rather than consensus — it is "not foolproof", in their cute phrasing. The first and only thing it has blocked for me is information that 17 year olds ought to know as they prepare to accept the responsibility to vote: their basic rights as citizens.
Untupled
Unfiltered validation has been on my to-do list for as long as there's been an Unfiltered. I tried a few different approaches, none of which were good enough to move it to-done.
First I tried doing it with extractors, the golden hammer of Unfiltered.
Requests are commonly accompanied by named parameters, either in the query string of the URL or in the body of a POST. Unfiltered supports access to these parameters with the Params extractor.
-- Extracting Params
The results were unsatisfying. If you want typed results, and you probably do, the only thing you can do with unacceptable input is refuse to match it, typically responding with a 404.
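For anyone who never used that style, a typed extractor looked roughly like the sketch below. This is reconstructed from memory of the old Params documentation rather than copied from the gist above, and the Limit name and /dogs path are made up:

import unfiltered.request._
import unfiltered.response._

// Matches only when a "limit" parameter is present and parses as an Int
object Limit extends Params.Extract("limit", Params.first ~> Params.int)

val intent: unfiltered.Cycle.Intent[Any, Any] = {
  case GET(Path("/dogs")) & Params(Limit(limit)) =>
    Ok ~> ResponseString("limit is " + limit)
  // a request with a missing or malformed limit simply fails to match,
  // so the plan passes on it and the app ends up responding 404
}

That refusal to match is exactly the limitation described above: the caller gets no hint about which parameter was wrong, or why.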
I never expected the extractor approach to work out, but the extractors are easy to build and easy to use -- for mediocre results. My grander scheme was to support parameter validation such that it would be easy to do these things:
- build your own reusable validators.
- build your own validators inline.
- respond to multiple unacceptable parameters with multiple error messages.
And I tried for a while to do this by hacking for-expressions. It was difficult to build, and difficult to use. I didn't use it much myself and periodically forgot how it worked. Something kept me from ever documenting it. Common sense?
Later, Jon-Anders Teigen contributed directives to Unfiltered. Directives use for-expressions to validate requests in a straightforward way. Missing a header we require? Respond with the appropriate status code. You can also use them for routing, by orElse-ing through non-fatal failures. What you couldn't do was accumulate errors, my requirement #3.
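If you haven't seen directives, a minimal intent reads something like the sketch below. This is written from memory of the directives documentation, with an invented path and response body, so treat the details loosely:

import unfiltered.request._
import unfiltered.response._
import unfiltered.directives._, Directives._

val intent = Directive.Intent.Path {
  case "/example" =>
    for {
      _ <- GET            // fails with an appropriate status if the method is wrong
      _ <- Accepts.Json   // fails with an appropriate status if the client won't take JSON
    } yield Ok ~> ResponseString("""{"answer": 42}""")
}

Each generator either succeeds and keeps going or short-circuits with its own error response, which is what makes this routing style possible.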
I had a week with no internet in the Adirondacks, and it seemed as good a time as any to face my old nemesis, parameter validation for the people.
The first thing I did was to build into directives a syntax for interpreting parameters into any type and producing errors when interpretation fails.
This took some time to get right, mostly choosing what to call things and crafting an API for both explicit and implicit use. Define your own implicit interpreters to produce your own types and error responses, then you can collect data like a lovesick NSA officer.
val result = for {
  device <- data.as.Required[Device] named "udid"
} yield device.location
This could report failure in different ways according to your own interpreter: the udid parameter is missing, isn't in the right format, isn't in the database, the database is down, and so on.
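Setting the real API aside, the idea of an interpreter is small enough to sketch in plain Scala. The following is conceptual only, not Unfiltered's actual types; it just shows a function from an optional raw parameter to either an error (standing in for an error response) or a typed value:

object InterpreterSketch {
  // An "interpreter": optional raw parameter in, error message or typed value out
  type Interpret[T] = Option[String] => Either[String, T]

  // A hypothetical interpreter for a "udid" parameter: it must be present
  // and shaped like a 40-character hex string
  val asUdid: Interpret[String] = {
    case None => Left("udid parameter is missing")
    case Some(s) if s.matches("[0-9a-fA-F]{40}") => Right(s)
    case Some(s) => Left(s"'$s' is not a well-formed udid")
  }
}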
After this I decided to tackle my dreaded requirement #3. This time I wouldn't abuse for-expressions or change the way directives fundamentally work. When directives are flat-mapped together, the result is a mapping of all their successes or the first failure.
So how do we produce multiple responses for multiple failures? I settled on the idea of combining multiple directives into one, which would itself produce a mapping of all their successes or a combination of all their failures. This combined directive would then, typically, be flat-mapped to other directives in a for-expression.
The combination step is easy enough to express, even if it was a little tricky to implement.
scala> (data.as.Required[String] named "a") & (data.as.Required[Int] named "b")
res1: unfiltered.directives.Directive[
  Any,
  unfiltered.directives.JoiningResponseFunction[String,Any],
  (String, Int)] = <function1>
The first type parameter of Directive has to do with the underlying request, the second is the joined error response type, and the third is the success type -- a tuple of the two directives' success types.
The joining method & produces a tuple so that successes preserve all their type information. We might use it in a for expression like so:
(a, b) <- (data.as.Required[String] named "a") & (data.as.Required[Int] named "b")
But what happens when there are more than two independent parameters?
scala> (data.as.Required[String] named "a") & (data.as.Required[Int] named "b") & (data.as.Required[BigInt] named "c")
res2: unfiltered.directives.Directive[
  Any,
  unfiltered.directives.JoiningResponseFunction[String,Any],
  ((String, Int), BigInt)] = <function1>
Oh dear — our tuples are nested. To assign the values now, we would need to nest tuples exactly the same.
((a, b), c) <- (data.as.Required[String] named "a") & (data.as.Required[Int] named "b") & (data.as.Required[BigInt] named "c")
This could get rather confusing, especially when we want to add or remove a parameter later.
To understand why the tuples are nested, think about what the & method does. For d1 & d2, it produces a new directive where the success values are a tupled pair. It does this always, according to its return type.
Now consider the case of three directives: d1 & d2 & d3. We could write it without infix notation: d1.&(d2).&(d3). It's clearer still with parentheses grouping the infix operations in their normal order of evaluation: ((d1 & d2) & d3). With repeated applications of & we'll necessarily produce typed, nested pairs. You can see a simpler example with the standard library's tuple constructor:
scala> 1 -> 2 -> 3
res3: ((Int, Int), Int) = ((1,2),3)
So this makes sense, even if we don't like it. We'd rather access the results as if the structure were flat. A Seq would allow that, but we'd lose the component types. Another approach would be to apply a single joining function across all the directives we want to combine:
a, b, c <- &(data.as.Required[String] named "a"), (data.as.Required[Int] named "b"), (data.as.Required[BigInt] named "c")
This looks pretty nice, but it comes at a high cost: the function & would have to be defined specifically for 2 arguments, 3 arguments, and so on up to 22. And if somebody wanted to use it for 23 independent parameters, too bad. You'll see that kind of code in some libraries including the standard library; I think it's usually generated. But I really didn't want to add it to Unfiltered if I didn't have to.
And I didn't have to.
In Scala, common data types are built into the standard library instead of the language. So we can do things like this without List being special, or known to the compiler at all.
scala> val a :: b :: c :: Nil = 1 :: 2 :: 3 :: Nil
a: Int = 1
b: Int = 2
c: Int = 3
In order to support rich user-defined interfaces comparable to language-level features in other languages, the Scala language itself has features like infix notation that are surprising to the newcomer. This is all Scala 101, but on occasion I'm still impressed by the possibilities that basic design decision gives to the programmer.
Let's try our list example again, with grouping parentheses.
scala> val (a :: (b :: (c :: Nil))) = (1 :: (2 :: (3 :: Nil)))
a: Int = 1
b: Int = 2
c: Int = 3
Because of the right-associativity of methods ending with a colon, the nesting is reversed, but you can probably see that a solution to flattening our nested tuples is getting closer.
The standard library provides a :: case class as a helper to the :: method of List, and like all case classes it has a companion extractor object. The above relies on infix notation for the extractor; the constructor style makes it a little more plain.
scala> val ::(a, ::(b, ::(c, Nil))) = (1 :: (2 :: (3 :: Nil)))
a: Int = 1
b: Int = 2
c: Int = 3
That's just how you expect case class extraction to work. On the right-hand side, :: is a method call on a list but its definition constructs the same case class. See for yourself. So actually, it's more like the standard library provides the method as a helper to the case class.
That's great for lists and we'd like to do the same for a pair, but we can't use the same case class technique since Tuple2 is itself a case class. Not to worry, even though case classes are a language feature, the extractor functionality they use is available to any object. (It was a short holiday for this golden hammer.) We'll call ours & since we want it to partner with the & method of Directive.
scala> object & {
     |   def unapply[A,B](tup: Tuple2[A,B]) = Some(tup)
     | }
defined module $amp
Let's try it out with modest constructor notation first.
scala> val &(a,b) = (1,2)
a: Int = 1
b: Int = 2
Infix?
scala> val a & b = (1,2)
a: Int = 1
b: Int = 2
Nesting???
scala> val a & b & c = ((1,2),3)
a: Int = 1
b: Int = 2
c: Int = 3
Sweet!!!
Now we can assign arbitrarily many nested success values from combined directives in a simple, flat statement.
a & b & c <- (data.as.Required[String] named "a") & (data.as.Required[Int] named "b") & (data.as.Required[BigInt] named "c")
And that is basically how it works. One caveat is that Unfiltered already had a & extractor object for pattern matching on requests, but I was able to overload the unapply method without issue — once again, I'm rescued by the soundness of Scala's core features.
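Here's a standalone illustration of that kind of overloading, pasteable into a REPL. The string-splitting overload is made up for the example and is nothing like Unfiltered's request matcher; it's only there to show two unapply methods living side by side:

object & {
  // split any pair, enabling patterns like `val a & b = (1, 2)`
  def unapply[A, B](tup: (A, B)): Some[(A, B)] = Some(tup)

  // an unrelated overload on a different type, purely for illustration
  def unapply(s: String): Option[(String, String)] =
    s.split("&", 2) match {
      case Array(k, v) => Some((k, v))
      case _           => None
    }
}

val a & b = (1, 2)       // uses the tuple overload
val k & v = "key&value"  // uses the string overload

The compiler picks whichever overload fits the type of the thing being matched, so the two uses don't interfere.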
You might wonder why I didn't name it ->. And indeed, I could have.
scala> object -> {
     |   def unapply[A,B](tup: Tuple2[A,B]) = Some(tup)
     | }
defined module $minus$greater

scala> val a -> b -> c = 1 -> 2.0 -> "3"
a: Int = 1
b: Double = 2.0
c: String = 3
Isn't that pretty? I don't know why this isn't defined in the standard library already, perhaps it's been discussed before. I think it would be useful, and I would have used that here instead of defining my own extractor. It would promote the pattern of nesting tuples, rather than defining 22 methods while taking a shot of vodka after each one.
(I am also aware, vaguely, that I have wandered onto ground inhabited by HLists. Don't shoot! I come in peace.)
In any case, an object -> is not in the standard library, and I don't want to invite the possibility of a collision should it be added later. Also, if nested tuples and -> extractors were already a common pattern, it would be one thing; since they are not yet, I don't want to have to explain to people why & is used to join directives while a -> extractor is used to split them.
All told, I'm very pleased with the resulting API for parameter directives, and hope people will get good use out of it. Directives are now fully documented and Unfiltered parameter validation is finally done.
> “The security at airports has increased so the bad guys are now traveling on the trains and buses,” said Robinson.

Remember when airport searches were about preventing hijackings? We carved an exception out of the 4th amendment because of the extraordinary danger of uncontrolled weapons on airplanes. It seemed like a good idea at the time. We didn't do that because we wanted to catch "bad guys" by searching every traveler. It is actually something that the "bad countries" of history have done, you know?

> About thirty minutes before the train departed, all passengers were asked to leave their bags on the boarding platform while two different detection dogs, one for narcotics and the other for explosives, sniff the luggage.

Nice and tidy, like that. One dog for the TERROR WAR and another dog for the DRUG WAR. Pretty soon we are going to need more dogs!

Also this happened in January, but only today did I find it and receive my weekly dose of government horror.

*Also I have another blog where I post this kind of thing, but the Tumblr share widget keeps defaulting to Coderspiel. It seems wrong to go back and delete misdirected posts, like I did last time this happened. Sorry if this is too political and not computery enough.*
> Sober or not, operating a boat at speed in complete darkness is a recipe for disaster.

> In large inland lakes, drunk boaters plow into docks at night all the time, and even die in the process. Guess who's fault it is? Theirs. Guess who's fault it isn't? The dock owners.

> Why? You're not supposed to go fast during the god damned night time.

That's right.
This release brings referential transparency to the Req type by making it an immutable builder of RequestBuilder — that’s right, Spring fans, it’s a factory-factory.
Previously Req was a type alias for the RequestBuilder class of the underlying async-http-client library. This class is...
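Typical usage shouldn't look much different after the change. A from-memory sketch of the post-rewrite API, against a made-up host:

import dispatch._, Defaults._
import scala.concurrent.Future

// url(...) yields an immutable Req; each refinement returns a new Req
// rather than mutating a RequestBuilder in place
val base = url("http://api.example.com/breeds")
val withQuery = base <<? Map("q" -> "collie")

// the underlying RequestBuilder is only produced when the request runs
val body: Future[String] = Http(withQuery OK as.String)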
Vaio Pro, for programming
My ThinkPad x201 served me well for two and a half years, with its barnstorming speed, small footprint, and long battery life. It's also heavy as a brick and about as thick, at least compared to the standard set by the Air and newer PC ultrabooks.
My work laptop for a year now has been a ThinkPad X1 Carbon, a remarkable piece of work by Lenovo after so many years of ThinkPad design stagnation. You can't really say anything bad about the Carbon hardware — except that with a 14" screen its footprint is bigger than what I want in a personal laptop. The legendary 12" PowerBook spoiled me for life.
So I was looking at smaller ultrabooks and came very close to buying the Dell XPS 13, a nicely made and practical system. But I hesitated. Would I regret not having a touchscreen in a year? Was it small enough? I dithered until the Haswell architecture was announced, and then I was glad I did.
I don't really follow computing hardware news unless I'm shopping for computing hardware, so I only know that Haswell is supposed to be a "game changer" in battery life. It does seem to be that.
I ordered a black Vaio Pro 11" with an i7, which is a custom build so I had plenty of time to anticipate its arrival. FedEx generously delivered it on July 5th, giving me a long weekend to shove Ubuntu onto it. More on that in a bit.
First, the hardware. This is easily the lightest computer I've picked up. The impression you have in handling it is that there is no battery at all.
It has a plastic case, as I prefer, but it feels very different from the X1 Carbon. Sharp edges, glossy plastic. It reminds me of high-end 1990s electronics, which makes sense given it's a Sony. It's not a bad case at all, but it doesn't feel as solid and precisely made and classy as the X1.
The keyboard takes some getting used to. It's chiclet style with short travel and high resistance. It's very different from the x201's old school (and good) keyboard, and the X1's MacBookish chiclet keyboard. This one is weird. But it's a keyboard; I don't fuss over them, I adapt.
The trackpad is not bad at all. It's a little less supple than the X1's but a hundred times better than the X201's and every other crap trackpad that laptops used to be equipped with. (And unlike the X1, I didn't have to fiddle with synaptics to get it not to jitter like a cokehead in Ubuntu.)
The Vaio's display is higher resolution than the X1's at a much smaller size, 1,920 x 1,080 in 11.6" diagonal. Its video output port is HDMI, for some reason. So I bought a VGA adapter for presentations, I just have to remember to bring it since everyone else is on Mini DisplayPort.
There's a fan that comes on if I put the i7 to work. Wrrr.
Okay enough about hardware, I want to talk about Windows 8.
Windows 8 is the most impressive development in software in years. It is actually really, really cool and everybody should respect Microsoft for making it. It is wayyyyy bolder and more interesting than what anyone else has done in "desktop" interfaces lately. I'm not even kidding.
Using the touchscreen with Windows 8 is a first-class phone/tablet experience. Tapping, swiping, all that stuff produces smooth animations and, you know, delight. You forget you're using a "computer", it's like you have a fancy tablet held up by a hinged stand with buttons on it.
It was cool. I could see myself using it. I could see myself using Internet Explorer and Bing with it. SERIOUSLY.
Except, you know, Windows. I can't do it. I can't do Windows, I can't do Mac OS X, I can't do anything that doesn't come with a supported open source packaging tool so I can have a controllable and uniform development environment across all my systems for the rest of my life.
And as you've probably heard, experienced Windows users themselves feel like aliens using Windows 8. The interface for legacy Windows apps is really pretty degraded. I repeatedly had difficulty performing simple tasks like scrolling through selection lists, and I'm not that stupid. Microsoft broke some stuff.
Still I think they made the right decision, hands down. If I were a Windows person I would complain about how badly the legacy interface behaves, but I would actually use and benefit from the new interface too. It's the future, get used to it.
Getting Ubuntu installed on the Vaio Pro is an epic, at present. I butted my head into the wall for a while before finding this blog post that tells you how to do everything. Not only do you have to install a development version of Ubuntu, you get to compile your own patched kernel! And more!
It's a lot, but I actually feel lucky that everything worked in the end. I've recklessly bought new model laptops in the past only to find that the wifi could not be made to work reliably in Ubuntu for many months. Things are getting a lot better, in terms of Linux compatibility, mostly thanks to Intel. Yay Intel.
Unfortunately Windows 8 was a casualty of one of my late-night Ubuntu installation binges. I deleted it completely by accident. It's sad because I was planning on playing around with 8 a bit more, and demoing it to people like me who forgot about Windows. Oh well.
Ubuntu runs well on this machine, with one caveat: the high DPI screen makes Unity feel clunky and ancient. Kind of like legacy mode in Windows 8, except it's all you've got. I hadn't anticipated this and am slightly bummed about it.
I've bumped up font sizes all over the place, but still some UI elements are stubbornly tiny. Chromium's tab bar is a ridiculously thin strip. And my precious touchscreen: useless. I mean, it works, but you wouldn't use it for anything. Compared to how it performed with Windows 8, it's like my Vaio lost a secret talent it had. I want to flick out that panel from the side of the screen!
I feel sorry for Unity, and GNOME, KDE, and everybody actually. These are dead-end interfaces. Microsoft of all companies has brought the future here. Laptops will be with us for a while but a laptop without a good touch interface is going to feel like half a laptop.
Mac OS X at least has resolution independence, but that's just putting the WIMP interface on life support. Windows 8 did the painful, right thing, and you will see Apple follow soon enough. Canonical should do the same with Ubuntu Touch. It's going to take that or some other hybridization with Android, or Tizen, or Chrome OS — but you have to walk away from years of work on GNOME. Some upstart distribution might do it first and dethrone Ubuntu.
Anyway, none of that matters as far as comparing this to any other laptop. It has a touchscreen, lying in wait until I again have software that puts it to good use. With some fiddling I can read everything on the screen, it's just not always pretty or easy.
So how about it, X1 Carbon vs. Vaio Pro 11? For my home machine I have no regrets with the Vaio, it's the size I want and there's no ThinkPad in the same ballpark. Despite its impossibly light weight it lasts longer on battery than the Carbon. Neither machine has unsolvable issues with Ubuntu, but on the ThinkPad you can install a stock 13.04 while the Sony requires all sorts of gymnastics (for now).
If you aren't a tiny laptop addict, you should probably go for the Carbon. It's super solid, respectable, has a straightforward Ubuntu install, and a big screen with a traditional pixel density. The battery life is okay, not remarkable. It recharges fast.
Lenovo takes their sweet time but they'll probably have a Haswell X1 Carbon with better battery characteristics by the end of the summer. And some day/year they'll surely equip their small-footprint ThinkPads with a modern, Carbon-type case. You'll hear about it all over the programmer internet.
But the Vaio Pro is already of that next generation of computers. Riding the bleeding edge of Linux in exchange for a devilishly small and light machine sounds like a Faustian bargain, but so far my soul is intact.
Vaio Pro 11 first impressions
Scala in 2007 - 2013
One of the highlights of this year's Scala Days, for me, was Shadaj Laddad's talk on Fun Programming in Scala. His unabashed love of programming reminded me why I started to learn it myself, at a later age when I had saved up enough money to buy a C++ development environment. Today anyone can download compilers for free, for any programming language. To be honest, I'm jealous of kids who now get to learn Scala as a first language.
I was surprised and proud to see how enthusiastically Shadaj was using giter8 to efficiently create new projects. When I made giter8 I was mostly inspired by the automation, efficiency, and evident pleasure that Ruby programmers took in automating mundane tasks like making a new CRUD website.
I'd tried to do the same with Java tool-chains in the past, mostly Maven's archetypes, but found them over-architected and under-designed for creation and maintenance. To eliminate grunt work for the end user, you had to do about 10x the grunt work as an author -- not exactly a formula for success in the unmarket of free software.
Sprung
The other cool thing about giter8's appearance in Shadaj's talk is it lifted my spirits a bit from what I'd been hearing about the morning's keynote, which I hadn't seen. Apparently Rod Johnson, Spring Framework author and recent dependency injected to the Typesafe board, had used Dispatch (my first Scala software project and one of my proudest creations) as an example of what not to do in Scala.
I wasn't going to watch the talk myself or write about it. It was the same old criticism of an old version of Dispatch, a programming style argument that never interested me and was ultimately easier to leave behind.
But after the videos were posted, inevitably, more of my colleagues saw the talk, and talked about the talk. Friends spoke up for me, and for Dispatch. I started to imagine that the criticism was very harsh, that it was personal, that it was something I should worry about. So yesterday, I watched Scala in 2018.
Johnson doesn't speak in the sneering tone I'd imagined. There's no emotion, no blood at all. His criticism of Dispatch isn't cutting, it's very general. Dispatch and libraries like it, he informs the audience, simply should not exist.
I have to say I was surprised when finally watching the video just how flatly Johnson ignores the fact that Dispatch was completely rewritten over a year ago. There is no point in listing the differences; they are many and self-evident. I invite you to watch the "Dispatch" portion of the talk with the Dispatch home page open in another window, and see for yourself.
I've given a few talks and I know the work that goes into every beat. It's time consuming; many hours of work go into a one-hour talk of any quality. So it's astonishing to me that a fundamental error in one of the longer chunks of the talk, one used to support one of its primary themes, survived Johnson's own review of his notes. Whether Dispatch is, still, a wrapper for Apache HttpClient, with dozens of symbols that cause right-thinking enterprise developers to blush politely -- these are facts you can check in ten seconds.
Further, you might expect a secondary review by someone more familiar with the Scala community, when a keynote puts its critical focus on that community. It's how you avoid these kinds of blunders, and how you later avoid having to say, "stop talking about my blunders and instead please discuss my important message." (We'll get to that.)
But for the record, if anybody is weirdly still interested in the full story of the Dispatch rewrite that I completed early last year, I wrote a series of posts about it. If not, that's great! Neither am I.
The road to perdition
Johnson makes his thesis plain. In five years he hopes Scala is a leading programming language for traditional enterprise apps. He doesn't have the same hope for startups. Or for front-end (web?) programming. This stated goal is the motivation for his criticism of some Scala libraries and of the ways that some people participate in the Scala community. He projects a series of beliefs and behaviors onto the community using the trusty myth list formula.
One must imagine all the hard core Scala programmers champing at the bit, or champing like that doctor zombie in the B wing of the W.H.O., to debate these "myths". But before tearing through the glass, it's important to ask yourself: do I want to be where Rod Johnson wants me to want to be in five years? Do I want Scala to be less popular in startups, more popular in traditional enterprise settings?
For me, debating how to get there would be like arguing over the fastest route to the gulag. If that's where this bus is headed I don't particularly care how it gets there, I just want off.
Luckily, that's not where the bus is headed, at all. Johnson greatly overreaches in discounting the use of Scala in startups, allowing only that some startups may experience a "Twitter scenario" where they are successful and have to scale.
Like many things in this talk that is purportedly about the next five years, the "Twitter scenario" is from five years ago. Today there's no shortage of startups using Scala from the beginning, in New York and elsewhere. You don't have to look any further than the list of sponsors for Scala Days to see some of them. There's no reason to think, and less reason to hope, that Scala's popularity in startups won't continue.
Aside from that, if you value a healthy open source community for its own sake, you might want that as a target. If you value projects that do something creative, cool, beautiful, or clever, you might want to list that. If you want Scala to be used as a teaching language in public schools, you set it as a goal for 2018.
In other words, you might want some things for Scala that are outside your immediate career interests.
Alternative mythologies
Suffice to say that my hopes for Scala are very different from Johnson's, and so would produce a very different set of guidelines. Or perhaps, none at all: it's your day off, your computer, your electricity -- do what you want.
I'm not going through the whole values exercise here, so I'll end with this thought: when I escaped Java five years ago it wasn't entirely, or even mostly, about the language. I was stifled and utterly uninspired by the Java Community -- the same one Johnson puts up as a stretch goal for Scala programmers.
I don't feel there is a Java community so much as there is a Java audience, in the sway of one "thought leader" after another, chasing one magic bullet after another, always one enterprise software purchase away from the ultimate salvation of not having to program.
But there are many programming language communities where individual freedom of expression is prized, where experimenters are praised for their successes and their failures, where we all thank our lucky stars we live in an era where programming exists and we can do all we want for free. For me Scala is one of these languages.
Having seen a few other talks that same morning, I don't worry about our future in the slightest.
> Need to use Dispatch 0.10.1 to access a non-standard URL? Use this custom async-http-client RequestBuilder to set your URL the way you want it.
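The builder in question is async-http-client's, which will take a URL string verbatim. The general shape is something like this (an illustration of the underlying API, not the linked snippet itself):

import com.ning.http.client.RequestBuilder

// Set the URL directly on the underlying builder, bypassing whatever
// parsing or normalization a higher-level helper would apply
val request = new RequestBuilder("GET")
  .setUrl("http://example.com/a%2Fnon-standard%2Fpath")
  .build()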