Exploring HTTP/2


I have been speaking about HTTP/2 for almost a year now,
and throughout that time I have stated over and over again:

As a community, we are still trying to figure out best practices. The examples I’m showing here are ideas. Nobody has figured this out yet. We [application developers], web server vendors, and browser vendors all have a role to play in exploring HTTP/2 and informing each other about what we find.

I want to make clear that while this post talks about PHP specifically, the issues are present in most server-side languages that use FastCGI or similar models.

When I first started to explore HTTP/2 Server Push, my first thought was that PHP would not be able to do server push. PHP has a single output buffer, and its interface (the Server API, or SAPI) with the web server is built around that — whatever is output to the buffer, is then served to the end-user.

Exploring further, it became clear that those discussing and trying to figure out how to deploy the HTTP/2 spec have solved this issue by using Link headers.

This would then inform the web server, which would then be responsible for making sub-requests and pushing the results out to the user. By treating each push as if it were a unique incoming request, we avoid the issue of PHP having a single output buffer. PHP is none the wiser.
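Such a header might look like this (the resource path here is illustrative, not from the original post):

```http
Link: </css/site.css>; rel=preload; as=style
```

An HTTP/2-capable server that honors this header can push /css/site.css alongside the response, while clients and servers without push support simply treat it as a preload hint.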

Why We Need a New SAPI

Most of the existing exploration of HTTP/2 focuses on websites (or web applications) rather than APIs. These two kinds of applications can vary a lot in their performance needs.

A webpage is typically a bundle of independent resources that are either data to display (HTML, images, fonts), metadata describing how to display it (CSS), or application code to make it dynamic (JavaScript).

An API, however, is typically composed of discrete resources, each with references to one or more other discrete resources.

While they may seem similar on the surface, the difference is that an API surfaces a data structure, while a webpage is a single flat document.

The API’s data structure is often backed by a datastore that supports efficient fetching of related resources.

Take the example I use in my talk, a blog API. A blog post might be comprised of:

  • The blog post itself
  • The author information
  • Related comments
  • The comments author information

We can imagine a couple of SQL queries like this:
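For instance (the table and column names here are my own assumptions, not from the original post), the two queries might be:

```sql
-- Fetch the post along with its author
SELECT p.*, a.name AS author_name, a.avatar_url
  FROM posts p
  JOIN authors a ON a.id = p.author_id
 WHERE p.id = ?;

-- Fetch the comments, each with its author
SELECT c.*, a.name AS author_name, a.avatar_url
  FROM comments c
  JOIN authors a ON a.id = c.author_id
 WHERE c.post_id = ?
 ORDER BY c.created_at;
```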

Which, in a perfect RESTful world, would result in a number of separate resources, each with their own URL (and therefore, a separate request):

── post resource
    ├── author resource
    │   └── author avatar image resource
    └── comments collection
        └── comment resource (per comment)
            └── author resource
                └── author avatar image resource

Due to the current world of HTTP/1.1, we would likely, at best, split this into two resources (which happen to match up with our SQL queries up there), with each of the sub-resources embedded like so:

── post resource with author resource
    └── comments collection with each comment and author resource embedded

More often than not, we’ll just flatten the entire structure to a single post resource.

This is a trade-off we have to make in the name of performance.

If we wanted to move towards the first model, we would end up having to do many small queries at each layer of the structure, which could be very inefficient, or duplicate effort — especially if we need some of the sub-resource data to generate the resource URLs (think: pretty URLs using the author’s name for author resources).

So, what do we do? We can cache the intermediate information for later retrieval by the web server sub-request, or we can write a SAPI that supports responding with the requested resource and subsequent pushes.

This however needs web server support.

Currently all SAPIs are based on the original CGI single request/response model. We need to move beyond this.

We need a new web server interface that supports multiplexing from the application layer, and we need PHP to be able to multiplex its output.

Additionally, we are going to want to control other features available in HTTP/2 dynamically for those multiplexed streams, such as stream weights and dependencies.

That Sounds Hard!

To do this would require a large effort on the part of many projects — on the scale of creating the original CGI spec, bringing together web server vendors and language authors to decide upon a standard way to handle multiplexed communication.

We also don’t know how effective having these abilities would be.

Browser vendors are still figuring out the best practices for handling what is now a much more complicated priority tree, and re-building rendering around it.

Because it’s difficult to do so, there are still too few sites taking advantage of these features for browser vendors to make anything more than an educated guess at how to do this.

New Application Architectures

Additionally, we’re going to have to explore new application architectures that feature asynchronous and parallel processing to create and output these multiplexed streams.

Introducing The HyPHPer Project

For the last few months I’ve been looking at the Python Hyper project, a series of libraries for handling HTTP/2. These libraries are for building concrete HTTP/2 clients and servers upon, and do not have any I/O — making them framework independent.

I have decided to try and port Hyper to PHP, as HyPHPer.

The goal is to provide a base for writing both servers and clients to explore both writing applications that can handle multiplexed responses, and documenting current browser behavior and performance implications of different response profiles.

We can then attempt to determine current best practices for performant web applications.

Current Status

During the PyCon AU sprint days I managed to port the Hyper HTTP/2 frame implementation (hyperframe) entirely to PHP — including tests.

This package is now available on GitHub and Packagist.

Still to be completed are:

  • HPACK
  • Priority
  • H2 Full Protocol Stack

If you’re interested in helping migrate these packages to PHP so that we can explore what HTTP/2 means for the future of PHP, let me know!

Your Slides Do Matter


At almost every event I’ve spoken at in the last twelve months, somebody has asked me how I created my slides. Typically they are talking about my code samples, but I wanted to cover all aspects of creating my slides.

You Are The Star, Not Your Slides

It’s often said that you, as the speaker, should be the star of the show. Your slides should be inconsequential, a supporting act. If the projector dies, your audience shouldn’t miss out on much.

While I strongly agree with the sentiment — after all, you’re paying to see the speakers — I disagree that your slides don’t matter. Your slides can be a valuable part of the experience for the audience.

Different Ways To Learn

Thanks to Heather White and her fantastic talk at Lone Star PHP, Stepping Outside Your Comfort Zone: Learning to Teach, I learned there are seven major ways that people learn:

  1. Visual
  2. Physical
  3. Aural
  4. Verbal
  5. Logical
  6. Social
  7. Solitary

I try to hit as many of these as possible with my talks. I use photographic title slides to give visual learners a keyframe to hang on to, as well as bright colors and diagrams. I speak out loud for aural learners, and often my talks are recorded. Physical learners can take their own notes, and I encourage questions for the social learners.

Presentation Technique

There are two different presentation techniques I see commonly used.

Bad Subtitles

As some of you may know, I have a hearing problem. Before I got hearing aids, going to a talk was hit-or-miss. If the speaker or the PA was too quiet, all I had to go on were the slides.

Due to my hearing loss, I always (even now, with hearing aids) watch TV with subtitles/closed captions on — I find that if you put too much content on your slides, they start to become bad subtitles for your speech.

I get distracted by differences, and I focus more on the words on the slide than on what you’re saying. Combating this scenario is why the advice to de-emphasize your slides is often given.

No Content

To counter this, some speakers have started to do slides with just images/gifs, sometimes adding a word or a title. This can be entertaining in person, but I get frustrated when I hear about a great talk and the slides end up being 25 pictures of cats with random words — there is no take home value, and nobody is going to retain everything you say.

Splitting The Difference

I try to blend these two techniques by having a title slide that I talk to, and then behind it I have bullets with more details that I usually skip past during the talk. This slide serves three purposes:

  • Speaker notes: I set Keynote to show only the upcoming slide in the speaker view, and I use the bullets as speaker notes
  • Clarification: I can switch to that slide to clarify something, e.g. a class name
  • Take home: when I upload the slides, I leave those in; they summarize everything I’ve said

Design

Fonts

The most important thing for a slide is legibility, so I spent a lot of time picking fonts. I have gone with the “Source Pro” family from Adobe, which comprises Source Sans Pro and Source Code Pro. The most recent addition to the family is Source Serif Pro, but I try to limit my slides to two fonts.

One of the benefits of these particular fonts is that they come in a bunch of weights. This gives me the ability to emphasize (semi-bold, bold, and black weights) or de-emphasize (extra-light and light) portions of my text.

These fonts also hold up well at font sizes small and large, and in print. I use Source Code Pro every day in my IDE, editors, and terminals, and have blown it up to fill a wall on a projector.

In fact, I use these same two fonts on this site, Source Sans Pro for titles and body text, and Source Code Pro for code (I override the default gist stylesheet for this!).

I try to make sure text is no smaller than 60pt, but will go down to about 52pt if necessary.

Colors

The most important thing about colors is contrast. I have found that, on the whole, bright colors on a dark background stand the best chance of being visible on projectors, but I did have to invert my slides for one conference this year.

I ended up finding the monokai theme that is available for most every highlighting tool, IDE, editor, and terminal out there. The monokai theme consists of five colors on a grey background, to which I added the Akamai blue and orange (branding++), and white for titles/text.

Monokai Colors

I use these eight colors for all code, graphs, and diagrams in my slides:

Monokai Colors Slide Examples

Title Images

As mentioned, I use images on my title slides. I generally go for images that are visually striking, and are in some way related to whatever topic the slide is about. For example, for HTTP/2 Server Push, I used a picture of a push mower (and in fact, replicated that in this post on server push).

I find most of the images I use by using the Flickr advanced search. I set license to “Commercial use allowed” (I find conference talks a grey area and decide to err on the side of caution), ensure safe search is on, and change the sorting to “Interestingness”. Sometimes I will also open the Advanced panel and choose only portrait orientations and large images.

If this doesn’t yield usable results, I will fall back to Google’s image search. Here, you can also set your license under “Search Tools > Usage Rights”, where I choose “Labeled for reuse”; I set “Type” to “Photo”, and “Size” to “Large”.

I find this typically will yield a visually striking image that I am free to use so long as I follow the rules laid out by the (typically Creative Commons) license.

I will then place them on my slide and lower the transparency (the dark grey background behind them darkens the image) until I feel there is enough contrast between the image and the title text.

Code

Lastly, the code. I do try to limit the amount of code I put in my talks; I don’t think slides are the best medium to learn to code from. This does mean that I try to make my code as easy to consume as possible.

Syntax Highlighting

As I mentioned, I use the monokai theme (or monokai-light for the inverted one), and I use the Python Pygments library to perform the actual highlighting. Pygments has the ability to produce RTF formatted output, so I have a function in my bash setup that will take the contents of my clipboard and replace them with a syntax-highlighted version of the same text:
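A sketch of such a function — the body is my reconstruction, not the original, and it assumes macOS’s pbpaste/pbcopy plus the Pygments pygmentize CLI are installed:

```shell
# Reconstruction of the clipboard-highlighting helper; "key" matches how
# it is invoked later in this post.
key() {
  local lang="${1:-php}"
  local opts="style=monokai,fontface=Source Code Pro,fontsize=120"
  # PHP snippets without an opening <?php tag need the startinline option
  if [ "$lang" = "php" ]; then
    opts="$opts,startinline"
  fi
  pbpaste | pygmentize -l "$lang" -f rtf -O "$opts" | pbcopy
}
```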

This function takes its first argument as the language to highlight (defaulting to PHP), and sets the “startinline” option if necessary. It then pipes the output of pbpaste into pygmentize, passing in flags to set the format (-f rtf), and then sets the theme (style), font (fontface), and font size (fontsize) options with the -O flag: -O style=monokai,fontface=Source Code Pro,fontsize=120. The fontsize is double whatever the pt size would be; as I mentioned earlier, I try to use 60pt, so it’s set to 120.

Lastly, it pipes the output to pbcopy allowing me to just Cmd+V and paste it back into Keynote.

I call it as just key for PHP code, or, say, key json for JSON instead.

Example Highlighting

I also use animation to highlight code as I step through it — this is what I think makes most people ask what tool I use.

You can see an example here:

The purpose of this is to ensure that the audience can follow along as I explain the code. I will also show larger blocks of code and zoom in and pan around, I believe this helps maintain context in larger code examples.

I do this entirely in Keynote. This effect is created primarily with three shapes: the first two are black rectangles set to 65% opacity for obscuring the upper and lower parts of the code, while the third is a transparent rectangle with a white border, allowing me to “ring” the code I’m highlighting.

I used to “hand” animate each stage of the animation, painstakingly adding animation “Actions” to move and resize the three independent elements, but I have now come up with a better technique: Magic Move.

Magic Move is a slide transition that will cause Keynote to automatically create an animation between matching elements on two different slides. So now, rather than have one slide with 30+ different animations, I duplicate the slide, move the elements where I want them to end up for that step, and turn on Magic Move for the previous slide:

Magic Move Slides

This drastically cuts down on the time it takes to do the animations and the animations end up being smoother as a result. I no longer dread wanting to change the code on any given slide.

Live Demos

My typical mantra when it comes to live demos is: don’t.

But, there’s something entertaining about a live demo, and I’ve never met an audience who wasn’t forgiving about issues. My hard and fast rule is to never do a demo that requires internet connectivity.

When giving live demos I always make sure I have a screencast of the same demo, as well as screenshots — worst case scenario I can flip through slides and give the play-by-play.

More recently I’ve been building my demos inside docker containers; this gives me a simple and easy way to manage them, and ensure I’m getting repeatable results every time.

Time & Effort

I put 120+ hours into every slide deck I create, and I continuously tweak them before every presentation based on the previous rendition. I strive to make them as entertaining and educational as possible for the live audience, while also being informative for the passive audience.

Not Without My Slides

While I can give my talks without my slides — everything I say in my talks has, I’m sure, at some point been part of a conversation — I truly believe that they dramatically enhance my ability to teach. At the end of the day, the goal is to teach something, and as we learned above, different people learn in different ways.

So, yes, I can give the talk without my slides, but I’m potentially leaving people behind if I do.


Thanks to Katie McLaughlin for reviewing this post.

Community Relations: Not Just a Megaphone


In my last post on Building a Community Presence I went over some of the approaches for engaging communities. This time I want to talk about what the actual job of a community builder is.

On Selling

The role of the Community Builder is to sell you on the idea of the company and the product, not necessarily to sell you the product itself. Often, what we’re selling is not bought without some decision-making process behind it, so a sale is not going to happen then and there anyway. Provide the education, build the trust, and the sale will happen.

Why I Hate Evangelists

I personally have a visceral reaction to the title “Evangelist”, which is unfortunate because it’s on my business cards.

The issue I have with the title is that when I think of Evangelist, I think of a fanatic who will use whatever methods they can to convert you to their way of thinking. And yes, there are Developer Evangelists who will do this. They are bad community members in my opinion, and I don’t trust their opinions.

The truth of the matter is that there is no single solution to any technical problem, and anyone who tells you differently is lying.

I prefer the “Advocate” title, or better yet, the one I chose for myself at Engine Yard, “Community Engineer”.

Advocating For The Company/Product

Being an Advocate is important; advocating for something, in my mind, means putting its best foot forward. This meant that while at Engine Yard, I probably sent more potential customers to our competitors, because I knew that had they tried our service they would’ve been disappointed given their current needs. Engine Yard is an amazing platform, but it’s complex (due to its customizability), it often requires some changes to “cloudify” your apps, and it can be comparatively expensive if you’re at the small end of the scale.

If you fall into the segment of the audience that doesn’t need the customization and support, then you’ll find the product over-complicated, and probably, over-priced. That is not a positive experience, go use Heroku instead (for now). If you fall into the other segment, then you’ll find Heroku to be constraining and expensive instead, go use Engine Yard and you’ll find it to be the right product for you.

I believe that honesty plays a big role in advocacy. If people are to trust what you say, you must be able to not only point out the flaws in your competitors, but your own as well. If you fail to do so, and you knowingly “sell” a customer a product that isn’t a good fit, they will have a negative experience and that’s what they will broadcast to their peers — that, to me, is an anti-sale. You stand to lose far more business that way.

If you instead sell them on the right product, even if it’s not your own, while at the same time educating on both products, in the future they will seek you out, and self-select into being a customer. At the very least, they will seek your advice again.

This still falls under the megaphone part of the role, and it is probably the less valuable part of the role to your employer. Realistically, they can achieve similar results using traditional, informed marketing channels. And those flights are expensive.

Advocating For The [Potential] Customer

The most important part of the role is advocating for the customer, or potential customer. This is why I like the title “Advocate”: it cuts both ways. You are an advocate both to, and for, your customers and potential customers.

As an advocate, you are the person (or one of the people) closest to your customers. You attend the same events, you go out to eat and drink together, you attend the after parties together, and you commiserate the early morning afters. You also have conversations, about life, news, the weather, and work.

You hear their gripes and you are the conduit for upcoming news that affects them, negatively or positively.

I firmly believe that the most vital part of the role is creating a feedback loop directly from your customers to your product team. Your job is to aggregate requests (“I’ve been to 14 events so far this year and every time someone has asked about X”), spot trends (“Lots of people are starting to talk about Y, here’s what it might mean for us”), and raise concerns (“People find this part of our product confusing because Z”).

If your community team does not have a direct line back to your product team you are missing out on the most valuable information they are collecting.

Advocating For The Community

As mentioned in the previous post, as part of your role building relationships, and as the person most entrenched in the community, you should be working directly with marketing on messaging and interactions.

You should have the power to veto any messaging your company puts out to your community — after all, your knowledge of the community is one of the reasons they hired you. It’s in the company’s best interests to have you invested in their messaging: you are the one on the ground who has to deliver it, after all.

Advocating For Yourself

At the end of the day, being a community builder is a job. I personally won’t take on the role unless I believe in the product, because I want to enjoy my job, I want to help people do their jobs better, and I can’t do either of those things with something I don’t use, or don’t care for.

This is also part of the reason I didn’t look for new roles in the same space when switching jobs: it feels disingenuous to the community to “sell” them on one product one week, and then on a competitor the next. My paycheck doesn’t buy my loyalty; the company values and a product I believe in do.

I hope this is obvious when I interact with people, and I believe it’s a necessity to be good at this job. We must maintain our intellectual honesty and community integrity. Your reputation is your true value, for your company, for your community, and most importantly, for yourself.


Thanks to Tessa Mero, Austin Gunter, Margaret Staples, Katie McLaughlin, and Amanda Folson for reviewing this post.

Image courtesy of Chris Combe

Building A Community Presence


Your company has decided it needs to “do community”, whatever that means, and you’re community manager number one 1. What now? From my time as a developer advocate/evangelist under both marketing and engineering teams, I have come to some conclusions about how to build community presence. Though my experience is mostly with technical communities, this should apply pretty well to any community building.

Identify Your Product’s Potential Audience(s)

You’ve probably already done this to some degree; after all, to build a successful product you need to know — and understand — who you are building it for. But if your target audience is “developers”, that’s only marginally better than saying your audience is “humans”.

Chances are your product is going to fall into one of two categories: a tool built for use with a specific language (e.g. the excellent greenkeeper.io for keeping your node.js npm dependencies up-to-date), or one built for most every language (e.g. Travis CI for continuous integration).

If you fall into the former category, then you’ll have a much easier time narrowing down the identity of your audience: your product itself defines it. For the latter, you need to recognize that you have multiple (albeit similar) audiences; working with, say, the PHP community will be different from working with the Ruby and Java communities.

There are, in my mind, three approaches to building your community team:

Breadth First Evangelism

If you have multiple communities you wish to target, you may decide that the most important thing to do is to get your name and your product out there in front of as many appropriate eyeballs as possible.

In this case, you do not have the bandwidth to forge deep, meaningful relationships with every community in which you want a presence. You cannot physically be everywhere at once!

These kinds of superficial touches will be things like conference sponsorships that involve little more than plunking down some cash and getting your logo/booth at an event; maybe you sponsor an open bar, or an unconference track. You give away the pens, the t-shirts, and the other branded swag, and you leave.

You write a bunch of high-level blog posts, you make sure you have docs with examples in a multitude of languages, and you go home, job done.

This is a great way to start, but mostly superficial. You will not identify your community advisors, you will not create long lasting relationships, and you will get less valuable/actionable feedback.

What you should be doing is trying to identify the communities in which you want to foster deeper relationships: the communities you want to pay special care to because they need more of your time to understand your product, or, better yet, the ones that are getting traction and starting to provide interesting insights, use cases, and feedback. This leads to the next stage:

Depth First Evangelism

If you have the second type of product (i.e. specific to one community), or, you’re ready to start deepening the relationships you have started to cultivate through your breadth first evangelism efforts, it’s now time to start concentrating your efforts – and expanding your team.

This is when you will want targeted evangelists: a PHP Evangelist, a Ruby Evangelist, a Python Evangelist. These people bring their knowledge of a particular community to bear in determining what your messaging should be and how your interactions are orchestrated to achieve the community relationships your product needs.

Now you will start to do things like working with a conference organizer to create a sponsorship that provides much more value to the attendees, and can actually shape the feel of an event. You will start to sponsor open source work that is relevant to your company, and you will sponsor initiatives like diversity groups, and scholarships. You will make meaningful contributions to the community.

In return, you will get the insights into your product that help you improve it; you will find your most hardcore users, your beta testers, and your contributors.

You will continue the blog posts, but they will be much more in-depth and targeted, based on feedback from the community about what they need to know, rather than what you think you want to tell them. Your documentation too will follow suit – perhaps you put together a book around using your product as members of that specific community.

There is an alternative to depth first and breadth first, which is to localize your efforts:

Localized Evangelism

By constraining your geographic area, you are reducing your audience in the same way as in depth first evangelism, but you can still hit many different communities within that geographic area. To be clear here, I’m talking about an area which can be traversed within a day.

This allows the kind of deep relationships that add real value, but across a broader section of your audience.

This hybrid is potentially easier to scale, and it certainly cuts down on travel costs, but you are less able to take advantage of your existing community presence (which is likely limited to one or sometimes two communities), or your breadth will be biased in those directions.

It’s Just The Megaphone

This is what I call the megaphone part of the role, the evangelistic, outward pushing part. But it’s just one part of the role. While it’s the most visible, I’d argue it’s not the most important. In the next post, I will cover the other aspects of the role that make community engagement invaluable.


Thanks to Margaret Staples and Jonas Rosland for their feedback on this post.

Image courtesy of The Greater Southwestern Exploration Company, used under a CC-BY 2.0 License.


  1. There are many job titles (community engineer, developer advocate, developer evangelist, etc.), but they all involve some level of community presence. 

Authoring With The Atom Text Editor


Note: While this post starts with me gushing about AsciiDoc, I do also cover Atom + Markdown; skip down to read it.

I am always looking for better tools to make me more productive — whether that’s writing code, writing documentation, or writing books.

Most notably, I have switched to writing almost everything in Markdown. With one exception: books.

While working with O’Reilly on my Upgrading to PHP 7 ebook (free), I was introduced to AsciiDoc. I found it to be a much better solution for writing book-style content than Markdown: it already features syntax for the features added by LeanPub’s Markua format.

This includes different types of callout boxes (tips, notices, warnings, etc), sidebars, includes, custom attributes, cross references, and more.

My favorite feature is the ability to include portions of external files. This allows me to write syntactically valid PHP files, with opening tags, that I can then test by passing them to php -l and/or running them (depending on what they do!); by including specific lines I can omit the opening tag in the final output, or bring in specific lines and write about them as I go.

You can also add callout indicators. This works by adding <#> (where # is an integer starting from one), and if you precede it with a comment (e.g. // <1>) the comment syntax will be stripped when rendering. You then use the same <#> syntax in a numbered callout list below the code block to explain each annotated line.

For example, given the file code.php:

I can include it, omitting the opening tag, and adding an annotation using:
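As a sketch of both pieces (the file contents, line range, and callout text here are illustrative, not from the original post):

```asciidoc
// code.php contains, among other lines:
//   $user = load_user(42); // <1>

[source,php]
----
include::code.php[lines=2..4]
----
<1> load_user() fetches the record from the datastore.
```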

Now, O’Reilly Atlas, the SaaS tool from O’Reilly for publishing books, does some awesome things utilizing AsciiDoc, such as building the final book in multiple formats (and I highly recommend checking it out; you can still publish the end result on, say, LeanPub). Most of these features, however, are possible locally using a tool like AsciiDoctor, the most popular implementation of AsciiDoc rendering, which supports HTML5, DocBook, ePub, LaTeX, and PDF output.

OK, Enough About AsciiDoc, What About Atom?

Alright, so, what about authoring your content? One of the reasons I looked at Atom was that while Markdown has a multitude of available editors, AsciiDoc options are lacking. I started using TextMate with the AsciiDoc bundle, but I wanted live preview, so I switched to PhpStorm with the AsciiDoc plugin; it didn’t seem to support the full feature set of asciidoctor, though, and it’s a bit overkill for the task.

After tweeting, I tried AsciiDocFX, but it was also lacking, specifically in support for includes.

For my latest attempt to find authoring nirvana, I’ve moved on to Atom, and that’s what you’re all here for.

13 Must-Have Packages for Markdown and AsciiDoc Authoring in Atom


First the Markdown packages:

3. markdown-writer

markdown-writer adds some conveniences to the authoring of Markdown, such as automatically continuing a list when you hit enter, and support for easier Jekyll authoring.

2. linter-markdown (and linter)

A linter for Markdown syntax.

1. markdown-scroll-sync

This one is a must-have: it will sync the Markdown preview (Ctrl+Alt+m) scroll position to the current scroll position in the document.


Next, the AsciiDoc packages:

asciidoc-assistant

AsciiDoc Assistant is actually a meta-package for four other packages.

4. language-asciidoc

Syntax support and snippets for AsciiDoc (asciidoctor flavor), this is a requirement.

3. autocomplete-asciidoc

Autocomplete for AsciiDoc syntax. I don’t use it much, but it makes some of the more verbose blocks easier to type.

2. asciidoc-preview

An AsciiDoc version of the Markdown preview: live updating, supporting the full range of asciidoctor features, and, specifically, supporting includes.

This is as much a necessity as the syntax support.

1. asciidoc-image-helper

When you paste an image into an AsciiDoc document, it automatically saves the image to a specified path and inserts a link to it.


Lastly, there are non-markup specific packages that make general writing in Atom much nicer:

4. Zen

A distraction-free, full-screen writing environment. Hit Cmd+Ctrl+F on OS X, or Shift+F11 on Windows and Linux, and it will hide most of the editor UI, including tabs (by default), the status bar, the file browser, etc.

It will also center your document and can automatically turn on soft wrap (which I recommend).

It works well with both Markdown and AsciiDoc preview panes, and also the Markdown document outline from the markdown-document plugin, and the linter.

I have made one small change to my styles.less to force the status bar to be visible even in Zen mode.
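For anyone who wants the same tweak, here is a minimal sketch of the sort of rule I mean. Note that the `.zen` workspace class and the `!important` override are assumptions about how the Zen package hides the editor chrome; the exact selector may need adjusting for your Atom version:

```less
// styles.less: keep the status bar visible even in Zen mode.
// Assumes Zen hides it by toggling `display` on the status-bar element.
atom-workspace.zen status-bar {
  display: block !important;
}
```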

3. auto-soft-wrap

This add-on simply enables soft wrap automatically based on file extension. You can add .adoc, .asciidoc, and .md to the settings to make it apply to those files. auto-soft-wrap is not compatible with the Zen plugin, but Zen’s own soft wrap setting works fine (though it always applies).

2. linter-write-good (and linter)

write-good is a naive linter for English prose, informing you of ways to improve your writing.

linter-write-good brings it into the Atom UI; using the linter interface, it highlights use of passive voice, words that weaken meaning, and phrases that are wordy or unnecessary.

For an example of how this can impact your writing: this article was originally written without it, and then re-written following its suggestions. You can see the diff with the original article here for comparison.

1. status-stats-jbrains

The last add-on is the reason I keep the status bar visible in Zen mode: it shows your word count, as well as the grade level and Flesch-Kincaid Reading Ease scores. This helps ensure that my writing isn’t unnecessarily complicated.


These plugins are available via the Atom UI, or apm. Hopefully these will improve your Atom authoring experience!

The Syntax of Tech Communities


Just like the programming languages that are the centers of our communities, each community has its own set of rules and idioms — they have a real-life syntax.

In my (almost) three years in a developer relations role, I have attended events in several communities and observed how they differ from my primary community (PHP). As I’ve tried to understand those differences and the reasons behind them, I have also had many discussions with members of many other communities.

After attending my first PyCon (US), I was struck by just how welcoming and diverse the community is, and had many conversations trying to understand why. That is not what this post is about, though. This post is about conferences specifically, and how communities place different priorities on different things when it comes to how they run, organize, speak at, and attend events.

PHP

The first thing I found that is unique about the PHP community is that every event with an open CFP (i.e. not a local community day, hackathon, etc.) will reimburse at least some travel and lodging costs.

Larger community and for-profit events will typically cover all flight costs and at least (number of talks)+1 nights of hotel. Some will limit the number of international speakers to keep costs manageable, while others will have a fixed amount they can cover, with the rest being up to the speaker.

In fact, this is the reason I started speaking at conferences: I would never have been able to afford to attend otherwise.

These may be paid up front with the conference booking the flights on your behalf, or they may be a refund after the fact.

In the last few years, the smaller community events have added checkboxes to their CFPs (probably because it’s a default in OpenCFP) which allow you to request help with travel and/or accommodation. I think this change has come about as the number of speakers whose employers are willing to pay their way has increased.

Additionally, speakers get a full ticket to the event, but they are not paid anything further, such as an honorarium.

Due to these costs incurred by events, ticket prices are usually somewhere between $150 and $350, with tutorial days at additional cost. A few events are much more expensive (~$1000+), but those are typically not aimed at the community crowd.

Another thing I’ve noted is that it’s common for speakers to submit multiple talks to conferences, and this is highly encouraged for anyone who wishes to get a talk accepted: never submit just one!

As for the talks themselves, they usually run from a minimum of 40 minutes (which is rare) to a maximum of 60 minutes; 50 minutes is the most common format.

Ruby

The community which I spend the most time comparing (mostly with PJ) is Ruby. The first major difference I noticed was that there were no speaker packages. I was outraged by this, it felt like the equivalent of “it’ll be good for your portfolio”.

Then I found out why this was: the Ruby community values accessible ticket prices, and wants a low price point to make the event feasible for as many people as possible, including students, single parents, etc.

This would be impossible if speakers’ travel and hotel were paid for by the event.

Speakers are given a ticket to the event, and I’m sure for some that’s reason enough to speak, while others do it for the exposure and networking opportunities. They also seem to do it to contribute back to the community.

Also, when it comes to CFPs, numerous people I spoke to early on were aghast to learn that in the PHP community we submit multiple talks. However, this seems to be changing, and it is becoming more common to do so.

Talks are usually 25 minutes, with the max being 35 minutes. Some are as little as 20 minutes.

Python

The Python community seems to be somewhere in the middle. While Python events do not cover a speaker’s flight and hotel, the PSF has a formal grant program that allows those unable to afford to attend to request financial support.

And while ticket prices are not as cheap as at Ruby events, there are multiple ticket tiers, for example student, regular, and business tickets. For PyCon US, those were $125, $300, and $600 respectively. Everyone buys a ticket, including the event chairs, volunteers, and speakers; again, the PSF grant program can help here.

Submitting multiple talks seems to be the norm. And talks are 35-40 minutes long.

Perl

I have only recently started discussing this topic with members of the Perl community, but they seem to also value low-cost tickets, not covering speaker costs, and 35-40 minute talk slots. I may be wrong about this, but that’s what I understand to be the case. I am unsure whether speakers are given tickets to the event, or how CFPs are typically done.

There’s No Right Answer

It is obvious to me that community is hugely important to everyone I’ve spoken to about this, and I find it interesting how different communities have different priorities resulting in different conference experiences.

I will say that the Python way seems like a much more democratic and inclusive way to handle things, but it depends on the PSF, which is a core part of the community and requires monumental effort, oversight, and tons of volunteers. As an aside, the PSF also seems instrumental in the inclusiveness and diversity exhibited by the Python community.

At the end of the day, I think we all have valid motivations. The PHP way ensures that regardless of your financial situation, if you have something valuable to share then you can do so, while the Ruby way ensures that the most possible people can attend to hear what is being shared, and the Python way tries to strike a middle ground.

I do find it interesting how consistent things seem to be within each community, perhaps simply because early leaders set the expectations. It does show some of the differences of opinion on how to build a thriving community — and yet, even with these differences, they are all large, thriving communities.


Image courtesy of Benjamin Horn, used under a CC-BY 2.0 License.

Thanks to PJ and Rae for reviewing this post.

The Visibility Debate


A lot has been said about when to use public/private/protected. Some say that you should never use private, while others say you should never use protected.

About the only thing people seem to agree on is that public should be used with caution: your versioning should be based around your public API, and you have a responsibility to maintain that API.

The issue is mainly around the maintenance responsibility of protected vs private. Given that other developers can depend upon protected, it can effectively be considered to have the same overhead as your public API.

I personally very rarely use private properties, and I default to protected.

My thinking is thus:

  1. The public API is sacrosanct. Following SemVer, breaking the public API shall bring a new major version. I will do my best to support an existing public API for as long as possible, and break backwards compatibility only as a last resort.

  2. The protected API is important. It does not impact SemVer, though changes to it often accompany releases at the X.Y level. However, I pledge to document any changes in release notes and elsewhere, to ensure that if you pay attention you can track changes and avoid issues. I will make changes to the protected API as necessary. (n.b. as I write this, I’m considering making it a rule that X.Y should always be incremented for protected changes.)

  3. The private API is the wild west, and you’re on your own.

I think this is a pretty pragmatic approach: it preaches caution and engagement when extending third-party libraries, yet does not needlessly prohibit extension entirely.
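To make the three tiers concrete, here is a hypothetical sketch of what this policy looks like in a class. The class and method names are invented for illustration; only the visibility choices reflect the approach described above:

```php
<?php

class ReportGenerator
{
    // Public API: covered by SemVer. Breaking this signature
    // means a new major version.
    public function generate(array $data): string
    {
        return $this->render($this->normalize($data));
    }

    // Protected API: subclasses may rely on and override these,
    // but changes here are documented in release notes (and may
    // accompany an X.Y bump) rather than a major version.
    protected function normalize(array $data): array
    {
        return array_values($data);
    }

    protected function render(array $rows): string
    {
        return implode("\n", array_map('strval', $rows));
    }

    // Private API: internal detail, the "wild west". This may
    // change or disappear in any release without notice.
    private function debugLabel(): string
    {
        return 'report';
    }
}
```

A subclass can safely override `normalize()` or `render()` to customize behavior, as long as it follows the release notes; anything touching `debugLabel()` would be on its own.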