title: Node v0.10.0 (Stable)
date: 2013-03-11T16:00:00.000Z
slug: node-v0-10-0-stable
author: Isaac Z. Schlueter

I am pleased to announce a new stable version of Node.

This branch brings significant improvements to many areas, with a
focus on API polish, ease of use, and backwards compatibility.

For a very brief overview of the relevant API changes since v0.8,
please see [the API changes wiki
page](https://github.com/joyent/node/wiki/Api-changes-between-v0.8-and-v0.10).

## Streams2

In a [previous post](http://blog.nodejs.org/2012/12/20/streams2/), we
introduced the
"[Streams2](http://blog.nodejs.org/2012/12/20/streams2/)" API changes.
If you haven't reviewed the changes, please go read that now (at least
the tl;dr section).

The changes to the stream interface have been a long time in the
making. Even from the earliest days of Node, we've all sort of known
that this whole "data events come right away" and "pause() is
advisory" stuff was unnecessarily awful. In v0.10, we finally bit the
bullet and committed to making drastic changes in order to make these
things better.

More importantly, all streams in Node-core are built using the same
set of easily-extended base classes, so their behavior is much more
consistent, and it's easier than ever to create streaming interfaces
in your own userland programs.

In fact, the Streams2 API was developed while using it for modules in
the npm registry. At the time of this writing, [37 published Node
modules](https://npmjs.org/browse/depended/readable-stream) already
use the
[readable-stream](https://npmjs.org/package/readable-stream) library
as a dependency. The readable-stream npm package allows you to use
the new Stream interface in your legacy v0.8 codebase.

## Domains and Error Handling

The `domain` module has been elevated from "Experimental" to
"Unstable" status. It's been given more of a first-class treatment
internally, making it easier to handle some of the edge cases that we
found using Domains for error handling in v0.8. Specifically, the
domain error handler no longer relies on `process.on('uncaughtException')`
being raised, and the C++ code in Node is domain-aware.

If you're not already using Domains to catch errors in your programs,
and you've found yourself wishing that you could get better debugging
information when errors are thrown (especially in the midst of lots of
requests and asynchronous calls), then definitely check it out.

## Faster process.nextTick

In v0.8 (and before), the `process.nextTick()` function scheduled its
callback using a spinner on the event loop. This *usually* caused the
callback to be fired before any other I/O. However, it was not
guaranteed.

As a result, a lot of programs (including some parts of Node's
internals) began using `process.nextTick` as a "do later, but before
any actual I/O is performed" interface. Since it usually works that
way, people came to rely on it.

However, under load, it's possible for a server to have a lot of I/O
scheduled, to the point where the `nextTick` gets preempted for
something else. This led to some odd errors and race conditions,
which could not be fixed without changing the semantics of nextTick.

So, that's what we did. In v0.10, `nextTick` handlers are run right
after each call from C++ into JavaScript. That means that, if your
JavaScript code calls `process.nextTick`, then the callback will fire
as soon as the code runs to completion, but *before* going back to
the event loop. The race is over, and all is good.
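
You can see the new ordering directly in a small sketch like this:

```javascript
// nextTick callbacks fire as soon as the current JavaScript stack
// unwinds, before the event loop turns again.
var order = [];

setImmediate(function () {
  order.push('event loop');
  console.log(order.join(', ')); // next tick, event loop
});

process.nextTick(function () {
  order.push('next tick');
});
```

Even though `setImmediate` was scheduled first, the `nextTick` callback runs first, because it never waits for the event loop.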

However, there are programs out in the wild that use recursive calls
to `process.nextTick` to avoid pre-empting the I/O event loop for
long-running jobs. In order to avoid breaking horribly right away,
Node will now print a deprecation warning, and ask you to use
`setImmediate` for these kinds of tasks instead.
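
For example, a long-running job can yield between iterations like this (the `countDown` helper is a hypothetical illustration):

```javascript
// Breaking a long-running job into event-loop-friendly pieces.
// Using setImmediate (rather than recursive process.nextTick) lets
// pending I/O run between iterations; the recursive nextTick version
// starves the loop and triggers a deprecation warning in v0.10.
function countDown(n, done) {
  if (n === 0) return done();
  setImmediate(function () {
    countDown(n - 1, done);
  });
}

countDown(1000, function () {
  console.log('done');
});
```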

## Latency and Idle Garbage Collection

One of the toughest things to get right in a garbage collected
language is garbage collection. In order to try to avoid excessive
memory usage, Node used to try to tell V8 to collect some garbage
whenever the event loop was idle.

However, knowing exactly when to do this is extremely difficult.
There are different degrees of "idleness", and if you get it wrong,
you can easily end up spending massive amounts of time collecting
garbage when you'd least expect. In practice, disabling the
`IdleNotification` call yields better performance without any
excessive memory usage, because V8 is pretty good at knowing when it's
the best time to run GC.

So, in v0.10, we just ripped that feature out. (According to another
point of view, we fixed the bug that it was ever there in the first
place.) As a result, latency is much more predictable and stable.
You won't see a difference in the benchmarks as a result of this, but
you'll probably find that your app's response times are more reliable.

## Performance and Benchmarks

When the Streams2 feature first landed in master, it disrupted a lot
of things. We focused first on correctness rather than speed, and as
a result of that, we got a correct implementation that was
significantly slower.

We have a consistent rule in Node, that it cannot be allowed to get
slower for our main use cases. It took a lot of work, but over the
last few months, we've managed to get v0.10 to an appropriate level of
performance, without sacrificing the API goals that we had in mind.

Benchmarks are complicated beasts. Until this release, we've gotten
by with a pretty ad-hoc approach to running benchmarks. However, as
we started actually having to track down regressions, the need for a
more comprehensive approach was obvious.

Work is underway to figure out the optimum way to get statistically
significant benchmark results in an automated way. As it is, we're
still seeing significant jitter in some of the data, so take the red
and green colors with a grain of salt.

The benchmarks below were run on an Apple 13-inch, Late 2011 MacBook
Pro with a 2.8 GHz Intel Core i7 processor, 8GB of 1333MHz DDR3 RAM,
running OS X Lion 10.7.5 (11G63b). The numbers are slightly different
on Linux and SmartOS, but the conclusions are the same. The [raw data
is available](http://nodejs.org/benchmarks-v0.10-vs-v0.8/), as well.

### HTTP

Node is for websites, and websites run over HTTP, so this is the one
that people usually care the most about:

<pre style="background-color:#333;color:#eee;font-size:12px">
http/cluster.js type=bytes length=4: <span style="background-color:#0f0;color:#000">v0.10: 16843</span> v0.8: 16202 ................. <span style="background-color:#0f0;color:#000">3.96%</span>
http/cluster.js type=bytes length=1024: <span style="background-color:#0f0;color:#000">v0.10: 15505</span> v0.8: 15065 .............. <span style="background-color:#0f0;color:#000">2.92%</span>
http/cluster.js type=bytes length=102400: v0.10: 1555.2 <span style="background-color:#f00;color:#fff">v0.8: 1566.3</span> ......... <span style="background-color:#f00;color:#fff">-0.71%</span>
http/cluster.js type=buffer length=4: <span style="background-color:#0f0;color:#000">v0.10: 15308</span> v0.8: 14763 ................ <span style="background-color:#0f0;color:#000">3.69%</span>
http/cluster.js type=buffer length=1024: <span style="background-color:#0f0;color:#000">v0.10: 15039</span> v0.8: 14830 ............. <span style="background-color:#0f0;color:#000">1.41%</span>
http/cluster.js type=buffer length=102400: <span style="background-color:#0f0;color:#000">v0.10: 7584.6</span> v0.8: 7433.6 ......... <span style="background-color:#0f0;color:#000">2.03%</span>
http/simple.js type=bytes length=4: <span style="background-color:#0f0;color:#000">v0.10: 12343</span> v0.8: 11761 .................. <span style="background-color:#0f0;color:#000">4.95%</span>
http/simple.js type=bytes length=1024: <span style="background-color:#0f0;color:#000">v0.10: 11051</span> v0.8: 10287 ............... <span style="background-color:#0f0;color:#000">7.43%</span>
http/simple.js type=bytes length=102400: v0.10: 853.19 <span style="background-color:#f00;color:#fff">v0.8: 892.75</span> .......... <span style="background-color:#f00;color:#fff">-4.43%</span>
http/simple.js type=buffer length=4: <span style="background-color:#0f0;color:#000">v0.10: 11316</span> v0.8: 10728 ................. <span style="background-color:#0f0;color:#000">5.48%</span>
http/simple.js type=buffer length=1024: <span style="background-color:#0f0;color:#000">v0.10: 11199</span> v0.8: 10429 .............. <span style="background-color:#0f0;color:#000">7.38%</span>
http/simple.js type=buffer length=102400: <span style="background-color:#0f0;color:#000">v0.10: 4942.1</span> v0.8: 4822.9 .......... <span style="background-color:#0f0;color:#000">2.47%</span>
</pre>

What we see here is that, overall, HTTP is faster. It's just slightly
slower (1-5%) when sending extremely large string messages (i.e.,
`type=bytes` rather than `type=buffer`). But otherwise, things are
about the same, or slightly faster.

### fs

The fs.ReadStream throughput is massively improved, and less affected
by the chunk size argument:

<pre style="background-color:#333;color:#eee;font-size:12px">
fs/read-stream buf size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 1106.6 v0.8: 60.597 ................... <span style="background-color:#0f0;color:#000">1726.12%</span>
fs/read-stream buf size=4096: <span style="background-color:#0f0;color:#000">v0.10</span>: 1107.9 v0.8: 235.51 .................... <span style="background-color:#0f0;color:#000">370.44%</span>
fs/read-stream buf size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 1108.2 v0.8: 966.84 .................... <span style="background-color:#0f0;color:#000">14.62%</span>
fs/read-stream buf size=1048576: <span style="background-color:#0f0;color:#000">v0.10</span>: 1103.3 v0.8: 959.66 .................. <span style="background-color:#0f0;color:#000">14.97%</span>
fs/read-stream asc size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 1081.5 v0.8: 62.218 ................... <span style="background-color:#0f0;color:#000">1638.21%</span>
fs/read-stream asc size=4096: <span style="background-color:#0f0;color:#000">v0.10</span>: 1082.3 v0.8: 174.78 .................... <span style="background-color:#0f0;color:#000">519.21%</span>
fs/read-stream asc size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 1083.9 v0.8: 627.91 .................... <span style="background-color:#0f0;color:#000">72.62%</span>
fs/read-stream asc size=1048576: <span style="background-color:#0f0;color:#000">v0.10</span>: 1083.2 v0.8: 627.49 .................. <span style="background-color:#0f0;color:#000">72.62%</span>
fs/read-stream utf size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 46.553 v0.8: 16.944 .................... <span style="background-color:#0f0;color:#000">174.74%</span>
fs/read-stream utf size=4096: <span style="background-color:#0f0;color:#000">v0.10</span>: 46.585 v0.8: 32.933 ..................... <span style="background-color:#0f0;color:#000">41.45%</span>
fs/read-stream utf size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 46.57 v0.8: 45.832 ...................... <span style="background-color:#0f0;color:#000">1.61%</span>
fs/read-stream utf size=1048576: <span style="background-color:#0f0;color:#000">v0.10</span>: 46.576 v0.8: 45.884 ................... <span style="background-color:#0f0;color:#000">1.51%</span>
</pre>

The fs.WriteStream throughput increases considerably, for most
workloads. As the size of the chunk goes up, the speed is limited by
the underlying system and the cost of string conversion, so v0.8 and
v0.10 converge. But for smaller chunk sizes (like you'd be more
likely to see in real applications), v0.10 is a significant
improvement:

<pre style="background-color:#333;color:#eee;font-size:12px">
fs/write-stream buf size=2: <span style="background-color:#0f0;color:#000">v0.10</span>: 0.12434 v0.8: 0.10097 ..................... <span style="background-color:#0f0;color:#000">23.15%</span>
fs/write-stream buf size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 59.926 v0.8: 49.822 .................... <span style="background-color:#0f0;color:#000">20.28%</span>
fs/write-stream buf size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 180.41 v0.8: 179.26 .................... <span style="background-color:#0f0;color:#000">0.64%</span>
fs/write-stream buf size=1048576: <span style="background-color:#0f0;color:#000">v0.10</span>: 181.49 v0.8: 176.73 .................. <span style="background-color:#0f0;color:#000">2.70%</span>
fs/write-stream asc size=2: <span style="background-color:#0f0;color:#000">v0.10</span>: 0.11133 v0.8: 0.08123 ..................... <span style="background-color:#0f0;color:#000">37.06%</span>
fs/write-stream asc size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 53.023 v0.8: 36.708 .................... <span style="background-color:#0f0;color:#000">44.45%</span>
fs/write-stream asc size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 178.54 v0.8: 174.36 .................... <span style="background-color:#0f0;color:#000">2.39%</span>
fs/write-stream asc size=1048576: <span style="background-color:#0f0;color:#000">v0.10</span>: 185.27 v0.8: 183.65 .................. <span style="background-color:#0f0;color:#000">0.88%</span>
fs/write-stream utf size=2: <span style="background-color:#0f0;color:#000">v0.10</span>: 0.11165 v0.8: 0.080079 .................... <span style="background-color:#0f0;color:#000">39.43%</span>
fs/write-stream utf size=1024: <span style="background-color:#0f0;color:#000">v0.10</span>: 45.166 v0.8: 32.636 .................... <span style="background-color:#0f0;color:#000">38.39%</span>
fs/write-stream utf size=65535: <span style="background-color:#0f0;color:#000">v0.10</span>: 176.1 v0.8: 175.34 ..................... <span style="background-color:#0f0;color:#000">0.43%</span>
fs/write-stream utf size=1048576: v0.10: 182.3 <span style="background-color:#f00;color:#fff">v0.8</span>: 182.82 .................. <span style="background-color:#f00;color:#fff">-0.28%</span>
</pre>

### TLS

We switched to a newer version of OpenSSL, and the CryptoStream
implementation was significantly changed to support the Streams2
interface.

The throughput of TLS connections is massively improved:

<pre style="background-color:#333;color:#eee;font-size:12px">
tls/throughput.js dur=5 type=buf size=2: <span style="background-color:#0f0;color:#000">v0.10: 0.90836</span> v0.8: 0.32381 ....... <span style="background-color:#0f0;color:#000">180.52%</span>
tls/throughput.js dur=5 type=buf size=1024: <span style="background-color:#0f0;color:#000">v0.10: 222.84</span> v0.8: 116.75 ....... <span style="background-color:#0f0;color:#000">90.87%</span>
tls/throughput.js dur=5 type=buf size=1048576: <span style="background-color:#0f0;color:#000">v0.10: 403.17</span> v0.8: 360.42 .... <span style="background-color:#0f0;color:#000">11.86%</span>
tls/throughput.js dur=5 type=asc size=2: <span style="background-color:#0f0;color:#000">v0.10: 0.78323</span> v0.8: 0.28761 ....... <span style="background-color:#0f0;color:#000">172.32%</span>
tls/throughput.js dur=5 type=asc size=1024: <span style="background-color:#0f0;color:#000">v0.10: 199.7</span> v0.8: 102.46 ........ <span style="background-color:#0f0;color:#000">94.91%</span>
tls/throughput.js dur=5 type=asc size=1048576: <span style="background-color:#0f0;color:#000">v0.10: 375.85</span> v0.8: 317.81 .... <span style="background-color:#0f0;color:#000">18.26%</span>
tls/throughput.js dur=5 type=utf size=2: <span style="background-color:#0f0;color:#000">v0.10: 0.78503</span> v0.8: 0.28834 ....... <span style="background-color:#0f0;color:#000">172.26%</span>
tls/throughput.js dur=5 type=utf size=1024: <span style="background-color:#0f0;color:#000">v0.10: 182.43</span> v0.8: 100.3 ........ <span style="background-color:#0f0;color:#000">81.88%</span>
tls/throughput.js dur=5 type=utf size=1048576: <span style="background-color:#0f0;color:#000">v0.10: 333.05</span> v0.8: 301.57 .... <span style="background-color:#0f0;color:#000">10.44%</span>
</pre>

However, the speed at which we can make connections is somewhat
slower:

<pre style="background-color:#333;color:#eee;font-size:12px">
tls/tls-connect.js concurrency=1 dur=5: v0.10: 433.05 <span style="background-color:#f00;color:#fff">v0.8: 560.43</span> .......... <span style="background-color:#f00;color:#fff">-22.73%</span>
tls/tls-connect.js concurrency=10 dur=5: v0.10: 438.38 <span style="background-color:#f00;color:#fff">v0.8: 577.93</span> ......... <span style="background-color:#f00;color:#fff">-24.15%</span>
</pre>

At this point, it seems like the connection speed is related to the
new version of OpenSSL, but we'll be tracking that further.

TLS still has more room for improvement, but this throughput increase
is significant.

### net

The net throughput tests tell an interesting story. When sending
ascii messages, they're much faster:

<pre style="background-color:#333;color:#eee;font-size:12px">
net/net-c2s.js len=102400 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 3.6551</span> v0.8: 2.0478 ......... <span style="background-color:#0f0;color:#000">78.49%</span>
net/net-c2s.js len=16777216 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 3.2428</span> v0.8: 2.0503 ....... <span style="background-color:#0f0;color:#000">58.16%</span>
net/net-pipe.js len=102400 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 4.4638</span> v0.8: 3.0798 ........ <span style="background-color:#0f0;color:#000">44.94%</span>
net/net-pipe.js len=16777216 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 3.9449</span> v0.8: 2.8906 ...... <span style="background-color:#0f0;color:#000">36.48%</span>
net/net-s2c.js len=102400 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 3.6306</span> v0.8: 2.0415 ......... <span style="background-color:#0f0;color:#000">77.84%</span>
net/net-s2c.js len=16777216 type=asc dur=5: <span style="background-color:#0f0;color:#000">v0.10: 3.2271</span> v0.8: 2.0636 ....... <span style="background-color:#0f0;color:#000">56.38%</span>
</pre>

When sending Buffer messages, they're just slightly slower. (This
difference is less than the typical variability of the test, but they
were run 20 times and outliers were factored out for this post.)

<pre style="background-color:#333;color:#eee;font-size:12px">
net/net-c2s.js len=102400 type=buf dur=5: v0.10: 5.5597 <span style="background-color:#f00;color:#fff">v0.8: 5.6967</span> ......... <span style="background-color:#f00;color:#fff">-2.40%</span>
net/net-c2s.js len=16777216 type=buf dur=5: v0.10: 6.1843 <span style="background-color:#f00;color:#fff">v0.8: 6.4595</span> ....... <span style="background-color:#f00;color:#fff">-4.26%</span>
net/net-pipe.js len=102400 type=buf dur=5: v0.10: 5.6898 <span style="background-color:#f00;color:#fff">v0.8: 5.986</span> ......... <span style="background-color:#f00;color:#fff">-4.95%</span>
net/net-pipe.js len=16777216 type=buf dur=5: <span style="background-color:#0f0;color:#000">v0.10: 5.9643</span> v0.8: 5.9251 ....... <span style="background-color:#0f0;color:#000">0.66%</span>
net/net-s2c.js len=102400 type=buf dur=5: v0.10: 5.473 <span style="background-color:#f00;color:#fff">v0.8: 5.6492</span> .......... <span style="background-color:#f00;color:#fff">-3.12%</span>
net/net-s2c.js len=16777216 type=buf dur=5: v0.10: 6.1986 <span style="background-color:#f00;color:#fff">v0.8: 6.3236</span> ....... <span style="background-color:#f00;color:#fff">-1.98%</span>
</pre>

When sending utf-8 messages, they're a bit slower than that:

<pre style="background-color:#333;color:#eee;font-size:12px">
net/net-c2s.js len=102400 type=utf dur=5: v0.10: 2.2671 <span style="background-color:#f00;color:#fff">v0.8: 2.4606</span> ......... <span style="background-color:#f00;color:#fff">-7.87%</span>
net/net-c2s.js len=16777216 type=utf dur=5: v0.10: 1.7434 <span style="background-color:#f00;color:#fff">v0.8: 1.8771</span> ....... <span style="background-color:#f00;color:#fff">-7.12%</span>
net/net-pipe.js len=102400 type=utf dur=5: v0.10: 3.1679 <span style="background-color:#f00;color:#fff">v0.8: 3.5401</span> ....... <span style="background-color:#f00;color:#fff">-10.51%</span>
net/net-pipe.js len=16777216 type=utf dur=5: v0.10: 2.5615 <span style="background-color:#f00;color:#fff">v0.8: 2.7002</span> ...... <span style="background-color:#f00;color:#fff">-5.14%</span>
net/net-s2c.js len=102400 type=utf dur=5: v0.10: 2.2495 <span style="background-color:#f00;color:#fff">v0.8: 2.4578</span> ......... <span style="background-color:#f00;color:#fff">-8.48%</span>
net/net-s2c.js len=16777216 type=utf dur=5: v0.10: 1.7733 <span style="background-color:#f00;color:#fff">v0.8: 1.8975</span> ....... <span style="background-color:#f00;color:#fff">-6.55%</span>
</pre>

You might suspect that this is a result of the new Streams
implementation. However, running the same benchmarks without using
any of the code in Node's `lib/` folder, just calling into the C++
bindings directly, yields consistently similar results.

This slight regression comes along with significant improvements in
everything that sits on *top* of TCP (that is, TLS and HTTP).

Keep an eye out for more work in this area. Fast is never fast
enough.

## Continuous Integration

To support a higher degree of stability, and hopefully catch issues
sooner, we have a Jenkins instance running every commit through the
test suite, on each operating system we support. You can watch the
action at [the Node Jenkins web portal](http://jenkins.nodejs.org/).

Coming soon, we'll have automatically generated nightly builds every
day, and eventually, the entire release process will be automated.

While we're pretty rigorous about running tests and benchmarks, it's
easy for things to slip by, and our ad-hoc methods are not cutting it
any longer. This promises a much lower incidence of the sort of
regressions that delayed the release of v0.10 for several months.

## Userland

A year ago, we said that the innovation in the Node universe would be
happening in userland modules. Now, we've finally taken that to its
logical conclusion, and moved our iteration on **core** modules into
userland as well. Things like `readable-stream` and `tlsnappy` allow
us to get much more user-testing, experimentation, and contributions
than we otherwise would.

The userland module can live on as a compatibility layer so that
libraries can use the new features, even if they need to support older
versions of Node. This is a remarkably effective way to do node-core
development. Future developments will continue to be iterated in
userland modules.

## Growing Up <a name="enterprise"></a>

The question comes up pretty often whether Node is "ready for prime
time" yet. I usually answer that it depends on your requirements for
"prime time", but Node has been powering some high profile sites, and
the options for "real" companies using Node for The Business are
expanding.

It would be out of scope to try to provide an exhaustive list of all
the companies using Node, and all of the options for support and
training. However, here are a few resources that are quickly
expanding to fill the "Enterprise Node" space.

For those looking for commercial support,
[StrongLoop](http://strongloop.com/) (Ben Noordhuis & Bert Belder's
company) has released a distribution containing node v0.10 that they
will support on Windows, Mac, Red Hat/Fedora, Debian/Ubuntu and
multiple cloud platforms. You can [download their Node distribution
here](http://strongloop.com/products#downloads).

[The Node Firm](http://thenodefirm.com) is a worldwide network of key
Node contributors and community members that help organizations
succeed with Node. Through corporate training, consulting,
architectural guidance, and [ongoing consulting
subscriptions](http://thenodefirm.com/nodejs-consulting-subscriptions),
they have helped Skype, Qualcomm, and others quickly and effectively
adopt Node.

Node would not be what it is without [npm](https://npmjs.org/), and
npm would not be what it is without the registry of published modules.
However, relying on the public registry is problematic for many
enterprise use-cases. [Iris npm](https://www.irisnpm.com/) is a fully
managed private npm registry, from [Iris
Couch](http://www.iriscouch.com), the team that runs the public npm
registry in production.

[Joyent](http://joyent.com), the company you probably know as the
custodian of the Node project, provides high performance cloud
infrastructure specializing in real-time web and mobile applications.
Joyent uses Node extensively throughout their stack, and provides
impressive [post-mortem debugging and real-time performance analysis
tools](http://dtrace.org/blogs/dap/2012/05/31/debugging-node-js-in-production-fluent-slides/)
for Node.js applications. They are also my employer, so I'd probably
have to get a "real" job if they weren't sponsoring Node :)

## What's Next

The focus of Node v0.12 will be to make HTTP better. Node's current
HTTP implementation is pretty good, and clearly sufficient to do a lot
of interesting things with. However:

1. The codebase is a mess. We share a lot of code between the Client
and Server implementations, but do so in a way that makes it
unnecessarily painful to read the code or fix bugs. It will be
split up so that client and server are clearly separated, and have
their own well-defined APIs.

2. The socket pooling behavior is confusing and weird. We will be
adding configurable socket pooling as a standalone utility. This
will allow us to implement KeepAlive behavior in a more reasonable
manner, as well as providing something that you can use in your own
programs.

There is some experimentation going on in the
[tlsnappy](https://github.com/indutny/tlsnappy) module, which may make
its way back into the core TLS implementation and speed things up
considerably.

After 0.12, the next major stable release will be 1.0. At that point,
very little will change in terms of the day-to-day operation of the
project, but it will mark a significant milestone in terms of our
stability and willingness to add new features. However, we've already
gotten strict about maintaining backwards compatibility, so this won't
really be so much of a shift.

New versions will still come out, especially to pull in new versions
of our dependencies, and bugs will continue to be fixed. There's been
talk of pinning our release cycles to V8, and automating the release
process in some interesting ways.

The goal of Node has always been to eventually be "finished" with the
core program. Of course, that's a rather lofty goal, perhaps even
impossible. But as we take Node to more places, and use it in more
ways, we're getting closer to the day when the relevant innovation
happens outside of the core Node program.

Stability in the platform enables growth on top of it.

And now, the traditional release notes:

## 2013.03.11, Version 0.10.0 (Stable)

* npm: Upgrade to 1.2.14

* core: Append filename properly in dlopen on windows (isaacs)

* zlib: Manage flush flags appropriately (isaacs)

* domains: Handle errors thrown in nested error handlers (isaacs)

* buffer: Strip high bits when converting to ascii (Ben Noordhuis)

* win/msi: Enable modify and repair (Bert Belder)

* win/msi: Add feature selection for various Node parts (Bert Belder)

* win/msi: use consistent registry key paths (Bert Belder)

* child_process: support sending dgram socket (Andreas Madsen)

* fs: Raise EISDIR on Windows when calling fs.read/write on a dir (isaacs)

* unix: fix strict aliasing warnings, macro-ify functions (Ben Noordhuis)

* unix: honor UV_THREADPOOL_SIZE environment var (Ben Noordhuis)

* win/tty: fix typo in color attributes enumeration (Bert Belder)

* win/tty: don't touch insert mode or quick edit mode (Bert Belder)

Source Code: http://nodejs.org/dist/v0.10.0/node-v0.10.0.tar.gz

Macintosh Installer (Universal): http://nodejs.org/dist/v0.10.0/node-v0.10.0.pkg

Windows Installer: http://nodejs.org/dist/v0.10.0/node-v0.10.0-x86.msi

Windows x64 Installer: http://nodejs.org/dist/v0.10.0/x64/node-v0.10.0-x64.msi

Windows x64 Files: http://nodejs.org/dist/v0.10.0/x64/

Linux 32-bit Binary: http://nodejs.org/dist/v0.10.0/node-v0.10.0-linux-x86.tar.gz

Linux 64-bit Binary: http://nodejs.org/dist/v0.10.0/node-v0.10.0-linux-x64.tar.gz

Solaris 32-bit Binary: http://nodejs.org/dist/v0.10.0/node-v0.10.0-sunos-x86.tar.gz

Solaris 64-bit Binary: http://nodejs.org/dist/v0.10.0/node-v0.10.0-sunos-x64.tar.gz

Other release files: http://nodejs.org/dist/v0.10.0/

Website: http://nodejs.org/docs/v0.10.0/

Documentation: http://nodejs.org/docs/v0.10.0/api/

Shasums:

<pre>
b9e9bca99cdb5563cad3d3f04baa262e317b827c node-v0.10.0-darwin-x64.tar.gz
0227c9bc3df5b62267b9d4e3b0a92b3a70732229 node-v0.10.0-darwin-x86.tar.gz
9f5f7350d6f889ea8e794516ecfea651e8e53d24 node-v0.10.0-linux-x64.tar.gz
cc5f1cd6a2f2530bc400e761144bbaf8fcb66cc4 node-v0.10.0-linux-x86.tar.gz
42c14b7eab398976b1ac0a8e6e96989059616af5 node-v0.10.0-sunos-x64.tar.gz
ddcadbac66d1acea48aa6c5462d0a0d7308ea823 node-v0.10.0-sunos-x86.tar.gz
70eacf2cca7abec79fac4ca502e8d99590a2108a node-v0.10.0-x86.msi
c48c269b9b0f0a95e6e9234d4597d1c8a1c45c5a node-v0.10.0.pkg
7321266347dc1c47ed2186e7d61752795ce8a0ef node-v0.10.0.tar.gz
f8c6f55469551243ea461f023cc57c024f57fef2 node.exe
253ae79e411fcfddcf28861559ececb4b335db64 node.exp
acb8febb5ea714c065f201ced5423b0838fdf1b6 node.lib
0fdad1400036dd26d720070f783d3beeb3bb9c0a node.pdb
abcaf8ef606655a05e73ee5d06715ffd022aad22 x64/node-v0.10.0-x64.msi
e5d0c235629b26430b6e07c07ee2c7dcda82f35e x64/node.exe
43b3fb3a6aaf6a04f578ee607a9455c1e23acf08 x64/node.exp
87dd6eb6c8127a1af0dcca639961441fc303d75a x64/node.lib
50aca715777fa42b854e6cfc56b6199a54aabd3c x64/node.pdb
</pre>