Vance Lucas

Don’t Transpile JavaScript for Node.js

Transpiling JavaScript on the client is pretty much required for wide browser support, but transpiling JavaScript when running it server-side with Node.js is entirely optional, and I strongly recommend against doing it.

Node.js ES6/ES2015 Pains

On a current project I am working on, I started out going full ES6/ES2015 all the way, using import/export, etc., and transpiling everything in my src/ folder with babel into a build/ folder before running it with node. As I continued working and saving files, however, it became incredibly tedious waiting for the babel compile step before I could see any changes. It felt like I was using a language with static typing instead of the dynamic, fluid JavaScript I know and love. Using tools like nodemon became a chore, always with a built-in delay and an extra learning curve.

Some people resort to (and even recommend!) using babel-node as a way around this, blissfully running their new ES6 hotness, but inevitably run into the “Can I use this in production?” question when it comes time to deploy their code (hint: No. You shouldn’t. babel-node is not meant for production use). Congratulations. You have now created a different runtime environment for development vs. production, which is a huge devops mistake, and a certain cause of future headaches.

Server-Side Debugging Headaches

One of the main complaints about using CoffeeScript (Yeah – remember CoffeeScript? I know it’s been a few years.) was the difficulty in debugging and deciphering transpiled code you didn’t write. Well, welcome to 2016 and ES6. Here we are doing the same thing, only this time it’s okay (promise!), because it’s just JavaScript, right? Wrong. You will run into all the same headaches and issues of debugging transpiled code, only it will be worse since you are trying to fix a critical issue on the server side that affects all your users everywhere – not just a subset of users with a certain browser and operating system combination.

import/export, require(), and ES6 Modules

There is no JavaScript engine yet that natively supports ES6 modules – not even the bleeding-edge v8 engine in Chrome and Node.js or Microsoft’s new Chakra engine. If you are using import/export, babel is converting these statements to require and module.exports anyways. You should just go ahead and use require() and module.exports instead in Node.js. I know there are more features that are possible with import/export like tree-shaking (hat tip to Rollup.js), but these features are much more valuable on the client side in reducing bundle sizes than on the server with Node.js. Again – code clarity and ease of debugging and development are more important on the server side than being on the bleeding edge of ES6.
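To make the trade-off concrete, here is a rough sketch of the same tiny module written the plain Node.js way, with the import/export equivalents shown only as comments (the file names are made up for the example):

// math.js
'use strict';

// With babel you might write: export function add(a, b) { ... }
function add(a, b) {
  return a + b;
}
module.exports = { add: add };

// server.js
'use strict';

// With babel you might write: import { add } from './math';
var math = require('./math');
console.log(math.add(2, 3)); // prints 5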

You Can Still Use Most ES6 Features

The important thing to know, given my advice about not transpiling your JavaScript for Node.js, is that you can still use a bunch of great ES6 features without needing to transpile your code. It’s not like your choices are to use the new ES6 hotness or be stuck in 2009. The only real thing you will have to give up is import/export, which isn’t a huge sacrifice given how confusing some of the import/export rules are to newcomers, and the fact that the statements will be converted to require/module.exports anyways.

Step 1 is to add a strict mode declaration at the top of each file:

'use strict';

And then maybe some feature flags, depending on which other ES6 features you require (and even this is temporary, given that the new v8 engine already supports both destructuring and default parameters, so they will be enabled by default in Node.js v6.0). I personally use the --harmony_default_parameters and --harmony_destructuring flags:

node --harmony_default_parameters --harmony_destructuring src/server/index.js

Or using nodemon for automatic server reloading:

nodemon --harmony_default_parameters --harmony_destructuring src/server/index.js
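As a rough sketch of what this buys you (the file path above is just an example), here is the kind of ES6 code that runs directly in Node.js v4/v5 this way with no transpile step, with only the destructuring and default parameter lines depending on the harmony flags:

'use strict';

// let/const, arrow functions, template literals, and classes work without any flags
const greet = (name) => `Hello, ${name}!`;

class Server {
  // default parameters are gated behind --harmony_default_parameters on Node.js v4/v5
  constructor(port = 3000) {
    this.port = port;
  }

  start() {
    console.log(greet('world'), `Listening on port ${this.port}.`);
  }
}

// destructuring is gated behind --harmony_destructuring on Node.js v4/v5
const { port } = { port: 8080 };
new Server(port).start();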

Save Transpiling for the Client

The bottom line here is to save transpiling for the client-side where it is still needed for full browser support. You should never add complications or additional steps to run your code where it is not necessary, and the server is one of those places for JavaScript. Keep your server-side development cycles fast, lean, and simple. Your future self will thank you.

Disabled Comments

After switching back to WordPress from a static blog that did not have any comments, I left comments on out of curiosity to see what would happen. Sure enough, within a few days, I already had over 20 spam comments to moderate. I just switched comments back off, and don’t plan on turning them on anytime soon. They are too much of a hassle to police. There are lots of ways to contact me if you need help or clarification on any of my posts.

Switching from Middleman Back to WordPress

After a little over 2 years on Middleman, I decided to move my blog back to WordPress. Middleman – and static blogs in general – are a good idea (especially for security and performance), but I found it more difficult to write and contribute to my own blog on a regular, ongoing basis. This was especially noticeable after I started working for NetSuite instead of contracting, because I was no longer doing all of my work on my own personal laptop every day.

DevData: The Data You Need in the Language You Want

Around two months ago, I launched a new website called DevData. It is a website that I wished had existed for many years, so I finally just built and launched it myself.


Year of Making Stuff

Well, the rest of the year, anyways. Better late than never, right?

Inspired by Justin Jackson and his Build and Launch podcast, I have decided to commit to launching at least 4 new projects this year. It’s nowhere near the pace of Justin’s one product per week on his podcast, but I figure it’s a good starting point.

I already launched SoundingBoard a few days ago, which is a blog that I intend to eventually culminate in an ebook as well. It’s a bit of a different audience for me since it’s non-technical, but there is definitely a real need for the information I am writing for it.

My second product this year is close to launching, and is in beta with a few friends and (hopefully) future customers right now. That is another blog post for a little while later.

As for the remaining 2 projects, who knows. We’ll see what happens. There’s a ton of work ahead of me now.

SoundingBoard

I just re-launched SoundingBoard as a new blog to help non-technical people learn how to evaluate their app ideas.

During my time running Brightbit (a web development studio), I met with a lot of people about their app ideas. Some were bad and crazy, but most of the ideas I heard were good ideas that just lacked the critical thinking steps necessary to determine basic viability or technical feasibility.

The Tip of the Iceberg

I view app ideas like an iceberg. When most people dream up an app idea, they think only about the app itself, and fail to see the mountain of work beneath the idea itself. The app idea won’t necessarily be a bad idea – it may even be a downright good idea – but there are so many other considerations and questions that have to be answered to get a complete picture of the kind of work (time + money) involved in bringing your app idea to life.

This is where SoundingBoard comes in. If you know anyone that has a lot of app ideas, but doesn’t know who to talk to, the SoundingBoard blog is for them.

Upcoming Book

In addition to the blog, I am also writing a book. Make sure to subscribe to the blog or fill out the form on the book landing page if you want to be notified when it launches.

Working For The Man

After almost exactly one year of being fully on my own after shutting down Brightbit, I have decided to stop doing contract work and accept a full-time position at NetSuite. I debated a lot about either staying on my own or getting a full-time job, and in the end, the job won out.

A Modern PHP OpenX API Client

I released a new OpenX REST API Client that works with the newest OpenX v4 REST API. It uses Guzzle v4.x and the oauth-subscriber plugin. It is available on Packagist, uses the PSR-4 autoloader, and is properly namespaced. It took a bit of effort to put together, so I hope you enjoy using it, and I hope it saves you a lot of time.


Fixing Homebrew on OSX 10.10 Yosemite

If you upgraded to OSX 10.10 Yosemite, and now have a broken homebrew, fear not – the homebrew team has already fixed this!

Luckily, the steps to fix it are fairly simple.

First, update homebrew via git:

cd /usr/local/Library
git pull origin master

Next, use homebrew to update and clean your installed packages:

brew update
brew prune
brew doctor

Now you should be all set!

Footnote:

I originally found (and tweeted about) this article when searching for a fix, but ran into more issues after editing the brew.rb file, and eventually came to the solution of updating homebrew itself after seeing that the homebrew team had fixed the issue themselves.

An API is a Competitive Advantage

In this increasingly inter-connected world, APIs are becoming more and more important as time goes on. This is especially true if you have a business that requires integration of some sort, like metrics, notifications, integrated access to other systems (like telephony), payments, etc.

Companies like Stripe, Amazon, and Twilio have embraced the API-first approach, and in many ways embody and epitomize this movement as a whole.

Beyond Just Having An API

Just having an API is the obvious requirement for basic integrations. Going further than that, however, is the thought that your API can actually be a key point of differentiation from your competitors. Using this strategy (creating a robust, easy-to-use API) can be especially effective when you are going up against entrenched competitors, or when you are trying to make something that is traditionally very hard, easy.

Stripe And The Payments Industry

Ask any developer about online payment gateways, and they are likely to mention Stripe. Why? Because it was clear from the start that they really cared about developers, and put a high priority on their API. Not just creating an API – because every online payment system has an API – but creating a very good API that is robust, simple, well-documented, and easy to use.

In contrast, many of Stripe’s competitors are using SOAP APIs or an emulation of the Authorize.net API. The API documentation typically exists only in PDF form, and it’s something that is mailed to you by the sales department. You’re lucky if you can find it on the website. Sales first and developers second is pretty much the exact opposite of the approach Stripe took by focusing on developers and integrations first.

Here’s an example from the Stripe documentation – it’s just a simple cURL call to charge a card, and it returns a simple JSON response:

curl https://api.stripe.com/v1/charges \
    -u sk_test_BQokikJOvBiI2HlWgH4olfQ2: \
    -d amount=400 \
    -d currency=usd \
    -d card=tok_14i9vP2eZvKYlo2Cdr4h0oHs \
    -d "description=Charge for test@example.com"

Stripe did several things right here:

  • Provide a simple API with good documentation
  • Provide a fast on-boarding process with no red tape (rare for credit card processors)
  • Support subscription charges with no additional fees (also rare)
  • Target and market to developers
  • Provide good design and a nice, clean merchant interface

Stripe’s success is a combination of the above reasons as well as many other factors, but without a doubt their core product and main competitive advantage is their API. It shows in their overall developer experience, and has played a large role in their success in stealing market share from entrenched competitors like Authorize.net.

Amazon and the Public Cloud

For many, Amazon is synonymous with cloud computing. Many web hosts selling virtualized servers came and went before Amazon got into the game, but no one besides maybe DigitalOcean has had a similar level of success doing so.

From the start of Amazon Web Services, Amazon made it clear that they were a platform for developers to build on top of, and provided an API from day one. So while many other virtual hosting providers existed, Amazon EC2 was one of the only ones that developers could use to provision whole new servers with automated scripts and zero manual intervention. The availability of APIs to provision servers led to the creation of businesses built on top of Amazon’s infrastructure, like Heroku – which probably wouldn’t exist without Amazon’s APIs.
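As a rough sketch of what “zero manual intervention” means in practice, a few lines of script using the aws-sdk package can boot a brand-new server (the AMI ID is a placeholder, and AWS credentials are assumed to be configured in the environment):

'use strict';

// npm install aws-sdk
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.runInstances({
  ImageId: 'ami-xxxxxxxx', // placeholder – use a real AMI ID for your region
  InstanceType: 't2.micro',
  MinCount: 1,
  MaxCount: 1
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log('Launched instance', data.Instances[0].InstanceId);
});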

Rackspace, a much larger web host and significant competitor, didn’t launch a public API until years after Amazon did, but it was already too late, and they gave up significant market share to Amazon and Google Compute Engine. Amazon’s API was its killer feature and key differentiator. And we all know how well that has gone for them.

Twilio And Telecommunications

Twilio is a good example of a company using APIs to make something that is normally really difficult very easy. Now you don’t have to worry about which cell phone network the number you are texting belongs to, what country it is in, etc. Just integrate with the Twilio API, and you know it’s going to work.
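A rough sketch of that integration from Node.js using the twilio package (the account SID, auth token, and phone numbers below are placeholders):

'use strict';

// npm install twilio
var twilio = require('twilio');
var client = twilio('ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'your_auth_token');

client.messages.create({
  to: '+15551234567',
  from: '+15557654321',
  body: 'Your verification code is 123456'
}, function (err, message) {
  if (err) { return console.error(err); }
  console.log('Sent message', message.sid);
});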

For Twilio, their API is their entire business. There is no Twilio without an API, because if Twilio were just a web form that sent a text message to any given number – even if it still smoothed over all the carrier and location differences – it would not achieve the goal of automation, and thus would defeat the purpose.

In the years since Twilio launched, countless companies have relied on it for things like 2-factor authentication and phone number verification via SMS. Twilio can even power your entire phone system through tools like OpenVBX, all with a collection of REST APIs.

The Bottom Line

If you don’t have an open REST API that is easy to use, you will lose market share to a competitor who does. It’s time to start taking your API very seriously. An API is a competitive advantage.


