Planet Jabber

July 18, 2014

Fanout Blog

Mongrel2 HTTP server now in Debian/Ubuntu

Mongrel2 is a fast and simple HTTP & WebSocket server that communicates to backend workers via ZeroMQ. It does one thing and does it very well, making it an ideal part of a componentized architecture. The code is event-driven, allowing it to support thousands of concurrent connections and also asynchronous behaviors. These properties are especially important to realtime applications.

Fanout has been one of the most active contributors to the Mongrel2 project over the past year, adding features such as TLS SNI and improved streaming capability. We've also been working on making the server easier for people to get started with. And with that, we are proud to announce official packages for Debian and Ubuntu!

...

by justin at July 18, 2014 19:58

July 17, 2014

ProcessOne

GigaOM releases report on brand engagement through in-app communication

Mobile apps now allow businesses to be more responsive to users than ever before, but this interaction has created new customer expectations. While a mobile app must fulfill its core function, it must also continue a dialog with its users to remain relevant. Doing so requires ongoing, intelligent, targeted outreach to customers and an extension of customer-service strategy into the app itself.

Brands and businesses that develop mobile applications must be aware of the demands and limits of an increasingly sophisticated mobile audience, build a communications strategy that spans the appropriate communications channels, and play to the strength of each channel.


In-app communication encompasses three mobile-oriented channels:

  • Native Push Notification services.
  • In-app notifications.
  • In-app chat.

Together, they form the three main components of brand realtime user relationship management on mobile. Today, those three approaches are your best bet to establish a strong communication channel that provides real value to your users.

The field is quite new and we are glad to be able to share with GigaOM the first research on brand engagement through in-app communication.

The document gives examples, insights and best practices to strengthen the links with your mobile application user base and provide them with the best value at the right time.

We are pleased to make this report available for free to people interested in the Boxcar Push Notifications Service. Join our mailing list focused on leveraging in-app communication for brands to receive the report download link »

(Do not worry, it’s low traffic, no spam, we only care about the value we give you).

The report is also available to GigaOM subscribers.

by Mickaël Rémond at July 17, 2014 08:22

July 07, 2014

Isode

New “Military forms using XMPP” whitepaper and access to our Military forms Demo


Forms are important for military operations, and there is often a need to handle them quickly and share them with a large number of users; Medical Evacuation (MEDEVAC) alerts are one example.

XMPP-based open standard instant messaging is widely used by military organizations and is a sensible framework for sharing forms. Our new whitepaper [Military Forms using XMPP], published on the Isode website today, looks at the requirements for military forms and how XEP-0346 "Form Discovery and Publishing" (FDP) can be used to provide real-time military forms. It looks at how capabilities provided by M-Link support military forms using FDP, and how gateways can enable integration with other services. FDP is supported in the most recent R16.2 release, along with FDP Management in M-Link Console.

We have created a military forms demonstration web site using FDP and Isode’s demonstration Web FDP client. To use this demonstration you will require information on the demonstration accounts. Please contact pre-sales@isode.com for access to this information.

If you wish to set up an FDP system, this is explained in our FDP Evaluation Guide, which shows you how to set up a basic XMPP Form Discovery and Publishing (FDP) configuration using Isode's M-Link server, Isode's demonstration web client and Isode's FDP Demonstration Desktop Client.

by Hannah Gibbs at July 07, 2014 14:11

July 05, 2014

Peter Saint-Andre

Thoreau and the Seasons of Man

This evening I finished reading Thoreau's first book, A Week on the Concord and Merrimack Rivers. Toward the very end, he writes as follows:

July 05, 2014 00:00

July 01, 2014

Fanout Blog

You might not need a WebSocket

Before I begin, I want to say that WebSockets are great. I've even implemented RFC 6455 myself in Zurl and Pushpin, which are used by the Fanout.io service. Fanout.io also supports WebSockets via its XMPP-FTW interface using Primus.

However, after spending quite some time working on large distributed applications and gaining a greater appreciation of REST and messaging patterns, I feel that much of what typical web applications want to accomplish with WebSockets (or with socket-like abstractions) is perhaps better solved by other means.

...

by justin at July 01, 2014 01:32

June 30, 2014

ProcessOne

Cardinality Estimation

In the server world, we always need to maintain metrics; we need to measure in order to improve. A very common metric is the number of unique active users per unit of time. While this is really easy to describe, it is complex when it comes to implementation.

A naive implementation logs all events (say, user connections), either in memory or on disk, and counts the number of unique entries in the log over a given time frame by removing duplicates.
This is really resource-consuming and cannot scale on a real server handling millions of users. Better to take the probabilistic estimation route… and here comes a very impressive algorithm.

Back in 2007, a team at INRIA published a paper about an efficient algorithm for estimating the number of distinct elements, known as the cardinality, of large data ensembles. This paper by Philippe Flajolet, Éric Fusy, Olivier Gandouet and Frédéric Meunier can be found here.
There are many blog posts about it already, but the paper itself is worth reading.

The HyperLogLog algorithm is ingenious thanks to an uncommon approach that makes it require only a few kilobytes of data to give an accurate cardinality estimation on large sets. Basically, the idea is to rely on the probability distribution of the numeric representation (hashes) of user ids.

The HyperLogLog algorithm works for maximal cardinalities in the range [0..10^9] while handling a number of registers (m) in the range [2^4..2^16].
Of course, using fewer registers reduces computing time and memory use, at the cost of cardinality estimation precision.
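To make the idea concrete, here is a minimal illustrative Python sketch of the two core operations, register update and estimation. It is an illustration only, not the Erlang implementation discussed below: the hash function and constants are simplified assumptions, and the small- and large-range corrections from the paper are omitted.

    import hashlib

    REG_BITS = 12                 # bits of the hash used to pick a register
    M = 2 ** REG_BITS             # number of registers (m), here 4096

    def _hash64(value):
        # 64-bit hash of the element (a user id, for instance)
        return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

    def add(registers, value):
        h = _hash64(value)
        idx = h >> (64 - REG_BITS)                      # leading bits select the register
        rest = h & ((1 << (64 - REG_BITS)) - 1)         # remaining bits
        rho = (64 - REG_BITS) - rest.bit_length() + 1   # position of the leftmost 1-bit
        registers[idx] = max(registers[idx], rho)

    def estimate(registers):
        alpha = 0.7213 / (1 + 1.079 / M)          # asymptotic constant alpha_m from the paper
        z = sum(2.0 ** -r for r in registers)     # harmonic-mean indicator
        return alpha * M * M / z

    registers = [0] * M
    for i in range(100000):
        add(registers, "user-%d" % i)
    print(int(estimate(registers)))               # close to 100000, within a few percent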

So, what is the best value of m (the number of registers) for my needs?
Well, that all depends! Of course, we will run it on huge production services, so it must be fast. It also must be accurate in most cases.

Let’s assess error margins and approximations

From the original paper, we have an estimation of the standard error of the algorithm:

Let σ ≈ 1.04/√m represent the standard error; the estimates provided by HYPERLOGLOG
are expected to be within σ, 2σ, 3σ of the exact count in respectively 65%, 95%, 99%
of all the cases.

bits m σ 2σ 3σ
10 1024 ±3.25% ±6.50% ±9.75%
11 2048 ±2.30% ±4.60% ±6.90%
12 4096 ±1.62% ±3.26% ±4.89%
13 8192 ±1.15% ±2.30% ±3.45%
14 16384 ±0.81% ±1.62% ±2.43%
15 32768 ±0.57% ±1.14% ±1.71%
16 65536 ±0.40% ±0.81% ±1.23%
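The table above follows directly from that formula; here is a quick sketch to reproduce it (values match up to rounding):

    from math import sqrt

    for bits in range(10, 17):
        m = 2 ** bits
        sigma = 1.04 / sqrt(m)
        print(bits, m, "±%.2f%%  ±%.2f%%  ±%.2f%%" % (100 * sigma, 200 * sigma, 300 * sigma))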

Let's say one needs a cardinality approximation within about ±3%, with a confidence level of 95%.
One can get the cardinality ±3.26% using only 4096 registers.
Now if I want an error <2% with a confidence level of 99%, I must use 2^15 or 2^16 registers.

Now, what about execution runtime and memory consumption ?

Let's play with the Erlang implementation available on GitHub thanks to Shyun Yeoh (vaxelfel).

First, let's check how much Erlang VM memory is needed to store one HyperLogLog record, given the number of registers m:

m memory in bytes
2048 2845
4096 6173
8192 13341
16384 28701
32768 62469
65536 131101

The memory consumption is approximate, and depends on the default heap size and other low-level memory parameters. But it gives the big picture anyway.

Finally, let's try to update a HyperLogLog record one million times with unique values, and report the time and error margin:

m error ms
2048 +3.41% 2178
4096 +2.31% 2414
8192 +0.71% 6343
16384 -0.49% 11100
32768 +0.16% 25667
65536 +0.10% 9584

Unsurprisingly, the time consumed by a HyperLogLog update is almost proportional to the data structure's size in memory, with one exception.
When using m=2^16, there is an approximate 5x speedup in this implementation. I guess this is related to some binary matching optimization in the Erlang VM, but I could not get measurements clearly confirming this.

Parallelization

A production server never runs on its own. The magic of the HyperLogLog algorithm is that merging data from different servers is possible as long as they run with the same hash function and the same number of registers.
The resulting estimate over the merged data gives the cardinality for the whole cluster. Bravo!
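Merging is simply a register-wise maximum. Continuing the illustrative Python sketch from above (same assumptions: identical hash function and identical m on every node):

    def merge(registers_a, registers_b):
        # both nodes must use the same hash function and the same number of registers
        assert len(registers_a) == len(registers_b)
        return [max(a, b) for a, b in zip(registers_a, registers_b)]

    # cluster_estimate = estimate(merge(node1_registers, node2_registers))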

Conclusion

For rapid cardinality estimation using the Erlang implementation, there are two main uses of HyperLogLog:

  • For fast operation and minor resource consumption, use 2^12 registers.
  • For precise estimation: if memory is a problem, use 2^14 registers; if CPU is a problem, use 2^16 registers.

by Christophe Romain at June 30, 2014 08:47

June 27, 2014

ProcessOne

Google I/O: A couple of days with an Android Watch

I was at Google I/O conference during the past week. I will not yet comment on the overall tone of the conference, as I need to give it more thought.

However, I can comment on the smartwatch Google gave to Google I/O attendees. I picked the LG watch and gave Android Wear a try over two days. I picked this one over the other option, the Samsung Gear Live, because I was told the latter would not work very well with a non-Samsung device (I use a Nexus 5).

The software is somewhat nice. It is basically quick access to Google Now: simply making it more accessible may increase its usefulness. Integration with an Android phone is OK. The watch is especially nice for notifications, and I was thrilled seeing BBC Sport World Cup goal alerts on my wrist. Applications do not have to make any special changes to display their notifications on the watch.

However, there is a big catch. Wearing the device hurts. The LG strap is really painful on my wrist, and various attempts at adjusting it did not help. After wearing it for a while, all I wanted was to remove it to ease the pain. That's what I have now done. I just went back to using my phone as before, looking at its screen when I need to.
I (almost) gave up after a couple of days.

If Google (or Apple) expects to be successful in wearables, they need to focus on a perfect, flawless user experience. The device itself should be a pleasure to use, and the added value must be obvious.

User expectations are going to be very high. From watches to glasses, in wearable computing, "good enough" is simply not enough.
… And it looks like I am not the only one to think so. You can read Ben Thompson's excellent article for more: Android where?

LG watch

by Mickaël Rémond at June 27, 2014 22:53

June 16, 2014

Isode

Isode support to Boeing and NCI Agency at Unified Vision 2014


Over the last couple of years we’ve been conducting both ground and flight trials with a number of military aircraft operators to look at addressing the problems of text chat over constrained links (high-latency, unreliable connection, low-bandwidth).

Text chat has become a vital capability for the modern warfighter but most modern text chat deployments have significant problems, both architectural and functional, in the constrained link environment.

Addressing these problems has been a high priority for our development team and we believe that our M-Link XMPP server product now leads the field in this environment.

We continue to participate in trials whenever we’re given the opportunity, which is why we were very happy to support Boeing and NATO’s NCI Agency in the recent Unified Vision 2014 exercise, the largest ever test of NATO’s intelligence, surveillance and reconnaissance (ISR) capabilities.

M-Link capabilities, including Federated Multi-User Chat and submission of Tactical Reports (TACREPS) using dynamic chat forms, were extensively tested over a 10 day period. We’re very happy with the feedback and results we got from the tests, which will enable us to make even more improvements to M-Link’s performance.

The results from Unified Vision will be used as the baseline for implementation of a Joint ISR Initial Operational Capability, in 2016, for the NATO Response Force.

by Will Sheward at June 16, 2014 14:49

June 14, 2014

Peter Saint-Andre

Cultivating Your Higher Ground

I've been reading Thoreau's letters to Harrison Blake, which are a veritable mine of philosophical insights. For the purposes of writing Walking With Thoreau, I'm especially interested so far in a fascinating vision he draws of cultivating the spiritual reaches of life (letter of May 28th, 1850):

June 14, 2014 00:00

June 11, 2014

ProcessOne

A week at Apple WWDC – early thoughts

I have spent the past week in San Francisco at Apple WWDC (Worldwide Developers Conference). It was the richest and most energetic WWDC I have ever participated in. Apple announced a lot of new features that are going to benefit our XMPP and push platforms for iOS and our mobile software.

But, as I still ponder the overall implications of the conference, I wanted to share some of my initial thoughts and debunk two misunderstandings.

The general media failed to understand how major this conference was

My first comments are about the gap in understanding by the general media regarding this conference. In France, for example, Le Monde, one of the major French newspapers, wrote that Apple is refreshing its iOS, but no revolution has been announced. This was a common pattern found in the non-technical press.

This fails to measure how radical the change in iOS and OSX was for the development community. Apple opened a new set of APIs, allowing inter-application communication and custom widgets for the notification center. It also created a new language to improve the overall platform for developers. More generally, most of the 100+ talks at WWDC were about granting more power to developers at various levels, improving APIs, and solving common, long-standing technical limitations. An example from one session: Core Data performance has been improved for two use cases, batch update and async fetch. This announcement alone is a reply to complaints about Core Data performance for mass record changes; see On switching away from Core Data by Brent Simmons.

At the conference, there was general agreement that nearly all the wishes and complaints from Apple developers had been addressed. Some said it was the most important Apple developer event in 10 years, with beta testing, performance, more powerful APIs, and even a new modern language called "Swift". In a single week, the Apple development landscape radically changed. Marco Arment, a high-profile developer, noted that Apple opened new territory. Casey Liss said that during this conference, Apple changed its mindset, and its new message was about building a platform together with the development community.

WWDC is a developer conference, and it is getting back to its roots. The media were expecting hardware or software announcements; Apple has announced new hardware at WWDC in the past (the iPhone 3GS and iPhone 4, for example), so they were disappointed by announcements targeted at the developer community. However, it is the developers who build the ecosystem. What Apple gave them is the tooling that will make the platform much better and more powerful in the coming years. It will take a while for everyone to notice, but it is a profound change. This is a seed to improve the application ecosystem on iOS for the coming years.

“Swift” impact

Apple surprised the public with the announcement of a brand-new programming language. Despite it having been in development for four years, Apple managed to keep the secret until now. Launching a new programming language will have a huge impact on the development community. For developers, it was like Christmas in June.

That said, the impact was, again, largely misunderstood. It was described as a way to make development for iOS simpler and accessible to more developers. However, having attended many talks on Swift, it is clear to me that the goal is not to appeal to a mass of new developers. Google is using Java, one of the most widely taught languages in the world, on its platform, and there is no way to compete with the ubiquitous knowledge of Java among developers. Code factories are mostly built on Java skills, and Java is a programming language with a massive amount of manpower behind it.

What Apple proposes with Swift is a language that is efficient, making it well suited to mobile environments. It is also a language that is much more expressive and enjoyable to write than Objective-C, the existing de facto language for iOS and OSX programming.

What Swift is not is a simple language. It is extremely expressive and powerful, and, as such, it requires a deep understanding of many programming paradigms (object-oriented, functional). Developing for iOS and OSX requires knowledge of the many features available in the frameworks. No matter how you express your code, learning those frameworks takes time. You will still need a great deal of skill to write mobile apps.

Swift is there because Apple wants to help attract good developers—not simply a mass of developers. Apple wants to appeal to developers that are constantly looking for a better way to express their code and improve the performance and maintainability of their software.

The first articles and analyses on Swift are by no means about "writing iOS or OSX software in 21 days". They go deep into the influences and semantics of the language, and into the design choices that were made. For example, these are the first thoughts from the original lead developer of Rust, a language that was an inspiration for Swift. This is the type of article coming from professionals who want to use a state-of-the-art programming environment in a practical way. For example, Evan Miller explains how he found it more practical than Haskell, while inheriting some of its benefits, in his piece Swift impressions. Haskell is a language acclaimed for its properties, but it is often said to be used more in academic environments than by programmers writing typical mobile or web software.

With Swift, what Apple wants to do is attract and keep the best mobile developers working on iOS.

More to come on post-WWDC analysis

I will write more later on implications and expected improvements for ProcessOne and Boxcar software. I need time to think more about all the pieces of information I gathered during five days of talks and discussion with developers.

by Mickaël Rémond at June 11, 2014 08:56

June 08, 2014

Ignite Realtime Blog

(a)Smack 4.0.0 released

Five months after the release of Smack 3.4.1, the Ignite Realtime developer community is proud to announce the first release of Smack 4, which marks a milestone in the development history of Smack. Smack has undergone a major overhaul and refactoring, including moving from Ant to Gradle and from SVN to Git.

 

Smack 4 also includes security related fixes. Users are encouraged to update  as soon as possible.

 

Many people have helped to develop this release. We especially would like to thank

 

- Ryan Sleevi of the Google Chrome Security Team for reporting a security flaw in ServerTrustManager (SMACK-410)

- Thijs Alkemade for reporting a security flaw regarding IQ spoofing (SMACK-533, SMACK-538)

- Lars Noschinski for fixing the IQ spoofing flaws and adding support for roster versioning (SMACK-399)

- Jens Offenbach for helping make Smack an OSGi bundle (SMACK-343)

 

Since the API has changed in Smack 4, make sure to read the "Smack 4.0 Readme and Upgrade Guide".  A full changelog can be found in JIRA.

by Ignite Realtime Blog (communityadmin@igniterealtime.org) at June 08, 2014 12:03

June 02, 2014

Thijs Alkema

CVE-2014-1361: SecureTransport buffer overflow

Today, Apple released a fix to CVE-2014-1361 in SecureTransport. The essence of this bug is this: the TLS record parser would interpret a DTLS record even when using normal TLS, causing a buffer overflow when parsing a record header. I reported this issue to Apple on May 28th.

To summarize, the impact of this bug is small: it can disclose 2 specific bytes of plaintext to an attacker. Doing this will also cause the connection to be closed. It can also give an attacker the ability to carry out a replay attack, with a probability of success of 2^-16 (~0.0015%).

TLS vs DTLS

DTLS and TLS send their payloads in separate records of up to 2^14 bytes, where each record has a header. For TLS this header is 5 bytes: 1 byte of payload type, 2 bytes of TLS version number and 2 bytes indicating the length of the rest of the record.

(Aside: Why every record includes two extra bytes to include the version is not exactly clear to me. I haven’t ever seen it legitimately change except during the handshake, where the client would initiate with a TLS 1.0 record, but include that it supports up to TLS 1.2, and then switch to TLS 1.2 after the server replies using that version.)

DTLS records are similar, but their headers are 13 bytes instead: between the version number and the length there is a sequence counter. Contrary to TLS, DTLS was designed for datagrams (like UDP), so it doesn't require reliable or in-order delivery. To still be able to decrypt records and know their intended order, the sequence counter is included in every record. TLS also uses a sequence counter (to prevent attackers from reordering messages), but it is implicit: both parties simply count how many messages they have received or sent.
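As an illustration of the two header layouts, here is a small Python sketch (illustrative only, not SecureTransport code; the function and field names are mine):

    import struct

    def parse_tls_header(data):
        # TLS: 5 bytes -- type (1), version (2), length (2)
        content_type, version, length = struct.unpack("!BHH", data[:5])
        return content_type, version, length

    def parse_dtls_header(data):
        # DTLS: 13 bytes -- type (1), version (2), epoch + sequence number (8), length (2)
        content_type, version = struct.unpack("!BH", data[:3])
        epoch_and_seq = int.from_bytes(data[3:11], "big")
        (length,) = struct.unpack("!H", data[11:13])
        return content_type, version, epoch_and_seq, length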

Record parsing in SecureTransport

This is how Apple’s code used to parse these records:

SSLRecordReadInternal.c:

static int SSLRecordReadInternal(SSLRecordContextRef ref, SSLRecord *rec)
{
    int err;
    size_t len, contentLen;
    uint8_t *charPtr;
    SSLBuffer readData, cipherFragment;
    size_t head = 5;
    int skipit = 0;
    struct SSLRecordInternalContext *ctx = ref;

    if (ctx->isDTLS)
        head += 8;

    if (!ctx->partialReadBuffer.data || ctx->partialReadBuffer.length < head) {
        if (ctx->partialReadBuffer.data)
            if ((err = SSLFreeBuffer(&ctx->partialReadBuffer)) != 0)
            {
                return err;
            }
        if ((err = SSLAllocBuffer(&ctx->partialReadBuffer,
                                  DEFAULT_BUFFER_SIZE)) != 0)
        {
            return err;
        }
    }

    if (ctx->negProtocolVersion == SSL_Version_Undetermined) {
        if (ctx->amountRead < 1) {
            readData.length = 1 - ctx->amountRead;
            readData.data = ctx->partialReadBuffer.data + ctx->amountRead;
            len = readData.length;
            err = sslIoRead(readData, &len, ctx);
            if (err != 0) {
                if (err == errSSLRecordWouldBlock) {
                    ctx->amountRead += len;
                    return err;
                }
                else {
                    /* abort */
                    err = errSSLRecordClosedAbort;
#if 0 // TODO: revisit this in the transport layer
                    if((ctx->protocolSide == kSSLClientSide) &&
                       (ctx->amountRead == 0) &&
                       (len == 0)) {
                        /*
                         * Detect "server refused to even try to negotiate"
                         * error, when the server drops the connection before
                         * sending a single byte.
                         */
                        switch(ctx->state) {
                            case SSL_HdskStateServerHello:
                                sslHdskStateDebug("Server dropped initial connection\n");
                                err = errSSLConnectionRefused;
                                break;
                            default:
                                break;
                        }
                    }
#endif
                    return err;
                }
            }
            ctx->amountRead += len;
        }
    }

    if (ctx->amountRead < head) {
        readData.length = head - ctx->amountRead;
        readData.data = ctx->partialReadBuffer.data + ctx->amountRead;
        len = readData.length;
        err = sslIoRead(readData, &len, ctx);
        if (err != 0) {
            switch (err) {
                case errSSLRecordWouldBlock:
                    ctx->amountRead += len;
                    break;
#if SSL_ALLOW_UNNOTICED_DISCONNECT
                case errSSLClosedGraceful:
                    /* legal if we're on record boundary and we've gotten past
                     * the handshake */
                    if((ctx->amountRead == 0) &&                    /* nothing pending */
                       (len == 0) &&                                /* nothing new */
                       (ctx->state == SSL_HdskStateClientReady)) {  /* handshake done */
                        /*
                         * This means that the server has disconnected without
                         * sending a closure alert notice. This is technically
                         * illegal per the SSL3 spec, but about half of the
                         * servers out there do it, so we report it as a separate
                         * error which most clients -- including (currently)
                         * URLAccess -- ignore by treating it the same as
                         * a errSSLClosedGraceful error. Paranoid
                         * clients can detect it and handle it however they
                         * want to.
                         */
                        SSLChangeHdskState(ctx, SSL_HdskStateNoNotifyClose);
                        err = errSSLClosedNoNotify;
                        break;
                    }
                    else {
                        /* illegal disconnect */
                        err = errSSLClosedAbort;
                        /* and drop thru to default: fatal alert */
                    }
#endif  /* SSL_ALLOW_UNNOTICED_DISCONNECT */
                default:
                    break;
            }
            return err;
        }
        ctx->amountRead += len;
    }

    check(ctx->amountRead >= head);
    charPtr = ctx->partialReadBuffer.data;
    rec->contentType = *charPtr++;
    if (rec->contentType < SSL_RecordTypeV3_Smallest ||
        rec->contentType > SSL_RecordTypeV3_Largest)
        return errSSLRecordProtocol;

    rec->protocolVersion = (SSLProtocolVersion)SSLDecodeInt(charPtr, 2);
    charPtr += 2;

    if (rec->protocolVersion == DTLS_Version_1_0)
    {
        sslUint64 seqNum;
        SSLDecodeUInt64(charPtr, 8, &seqNum);
        charPtr += 8;
        sslLogRecordIo("Read DTLS Record %016llx (seq is: %016llx)",
                       seqNum, ctx->readCipher.sequenceNum);

        /* if the epoch of the record is different of current read cipher, just drop it */
        if ((seqNum >> 48) != (ctx->readCipher.sequenceNum >> 48)) {
            skipit = 1;
        } else {
            ctx->readCipher.sequenceNum = seqNum;
        }
    }

    contentLen = SSLDecodeInt(charPtr, 2);
    charPtr += 2;
    if (contentLen > (16384 + 2048))    /* Maximum legal length of an
                                         * SSLCipherText payload */
    {
        return errSSLRecordRecordOverflow;
    }

    if (ctx->partialReadBuffer.length < head + contentLen) {
        if ((err = SSLReallocBuffer(&ctx->partialReadBuffer, head + contentLen)) != 0)
        {
            return err;
        }
    }

    if (ctx->amountRead < head + contentLen) {
        readData.length = head + contentLen - ctx->amountRead;
        readData.data = ctx->partialReadBuffer.data + ctx->amountRead;
        len = readData.length;
        err = sslIoRead(readData, &len, ctx);
        if (err != 0) {
            if (err == errSSLRecordWouldBlock)
                ctx->amountRead += len;
            return err;
        }
        ctx->amountRead += len;
    }

    check(ctx->amountRead >= head + contentLen);
    cipherFragment.data = ctx->partialReadBuffer.data + head;
    cipherFragment.length = contentLen;
    ctx->amountRead = 0;        /* We've used all the data in the cache */

    /* We dont decrypt if we were told to skip this record */
    if (skipit) {
        return errSSLRecordUnexpectedRecord;
    }

    /*
     * Decrypt the payload & check the MAC, modifying the length of the
     * buffer to indicate the amount of plaintext data after adjusting
     * for the block size and removing the MAC */
    check(ctx->sslTslCalls != NULL);
    if ((err = ctx->sslTslCalls->decryptRecord(rec->contentType,
                                               &cipherFragment, ctx)) != 0)
        return err;

    /*
     * We appear to have sucessfully received a record; increment the
     * sequence number
     */
    IncrementUInt64(&ctx->readCipher.sequenceNum);

    /* Allocate a buffer to return the plaintext in and return it */
    if ((err = SSLAllocBuffer(&rec->contents, cipherFragment.length)) != 0)
    {
        return err;
    }
    memcpy(rec->contents.data, cipherFragment.data, cipherFragment.length);

    return 0;
}

head determines how many bytes the header should contain. charPtr points to the current position in the record. rec is a structure describing the record we’re parsing. ctx is the session context.

The computation of head at the top of the function correctly uses ctx->isDTLS, but the branch lower down keys on rec->protocolVersion, which has just been parsed out of the record header. This is data that just came from the network and has not been validated in any way. There is no check to make sure rec->protocolVersion == DTLS_Version_1_0 can only be true when ctx->isDTLS is set.

This means that an attacker can change the version number on a single record from a TLS version to DTLS 1.0 to make the client execute that DTLS branch, even though it is using a TLS connection. That might make it possible to modify the sequence counter.

Reordering attacks

The sequence counter in TLS is used to make it impossible for an attacker to remove messages, reorder messages or replay previous messages. The sequence counter is included in the MAC, which means the message will not validate when it isn’t in its original place in the sequence. Due to the bug in the code above, the attacker may be able to modify this sequence counter. What an attacker can do with that is hard to determine: it depends a lot on the exact fragmentation of the payload into records.

In HTTPS, for example, an attacker may try to make some JavaScript execute differently, but if the entire script fits in one record then there's not much an attacker could do. The most efficient way to send webpages or scripts is to use as few records as possible, as the padding and MAC add overhead per record. This means fragmenting the data every 2^14 bytes = 16 KiB (minus a bit of room for the MAC). By comparison, the current version of jQuery is 82 KiB. That would fit in 6 records, giving an attacker very few options to shuffle those fragments around; many of the resulting orderings will probably not even parse as valid JavaScript.

In more real-time protocols like IRC or XMPP (yes, of course I have to bring up XMPP again), the fragmentation is a lot easier to understand: these will include a few complete protocol packets within each record (often just 1). Having a malicious impact here will be a lot easier: an attacker would be able to drop a single chat message, retransmit one, reorder them, etc.

Rewriting the sequence number

Trying to exploit this, I quickly ran into the following problem: only 5 bytes of the record had been copied from the socket, so the SSLDecodeUInt64 call will read 2 bytes from the record, but 6 bytes past it too. This does make it possible to ensure the epoch matches (the two highest bytes of the sequence number), but the next 6 bytes are "random" data.

Looking a little closer, the next 6 bytes didn't turn out to be random at all. The buffer that records are read into gets reused (except when a record has too much payload to fit in the current buffer, in which case a new one is allocated), and decryption of the record happens in place in this buffer. So when I tried to exploit this against an HTTPS server which had previously sent a reply starting with HTTP/1.1 200 OK, Safari ended up interpreting HTTP/1 as the sequence number. The length field of the record should follow the sequence number, so it interpreted .1 as the length.
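To illustrate the confusion, here is a small Python sketch (illustrative only, not SecureTransport code; the only assumed constant is that DTLS 1.0 is encoded as 0xFEFF on the wire):

    def vulnerable_view(buffer):
        # 'buffer' models the partial read buffer: the 5 header bytes just read
        # from the network, followed by stale bytes left over from earlier records.
        content_type = buffer[0]
        version = int.from_bytes(buffer[1:3], "big")
        if version == 0xFEFF:                              # looks like DTLS 1.0
            seq_num = int.from_bytes(buffer[3:11], "big")  # 2 real bytes + 6 stale bytes
            length = int.from_bytes(buffer[11:13], "big")  # entirely stale bytes
        else:
            seq_num = None
            length = int.from_bytes(buffer[3:5], "big")
        return content_type, version, seq_num, length

    # Stale plaintext from a previous HTTPS response still sits in the buffer:
    stale = b"HTTP/1.1 200 OK "
    # An attacker rewrites the version bytes of a TLS record (type 23) to 0xFEFF:
    tampered_header = bytes([23, 0xFE, 0xFF, 0x00, 0x20])
    print(vulnerable_view(tampered_header + stale))
    # -> the 8-byte seq_num ends in b"HTTP/1" and the length comes from b".1" (11825)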

TLS 1.0

I tried a lot of variations, setting up some plaintext in the buffer first and then trying to reinterpret that as the sequence counter, until I finally realized that what I was trying to do wasn't possible with TLS 1.0: all the ciphers I was trying used more inter-record state than just the sequence counter. CBC mode means the decryption of every record depends on the ciphertext of the previous record, so reordering would never work. RC4 keystreams are also inherently stateful. As TLS uses MAC-then-Encrypt (MtE), these records will decrypt to gibberish and then fail the MAC. If TLS had used Encrypt-then-MAC (EtM) here (which a lot of cryptographers nowadays consider the better choice), the MAC would have succeeded, after which the record would have decrypted to gibberish. That gibberish would have been passed to the application, as the TLS layer would not have been able to detect anything wrong with it.

TLS 1.1+

TLS 1.1 and TLS 1.2 don’t have that problem: these add an explicit IV to every record to prevent attacks like BEAST. For compatibility with TLS 1.0, this is usually implemented by prepending a block of random data to the plaintext and including that in the encryption. The IV that is used to encrypt this new first block doesn’t matter: it only influences the plaintext of the first block, which is deleted by the receiver after decryption. It doesn’t even need to be the case that the receiver decrypts the first block to the same thing as the sender used. So here every record can be decrypted independently, even when inserted at a random other position in the sequence. In practice, the IV that is used as the IV for the first block is often still the ciphertext of the last block of the previous record, as that makes it easier to be compatible with TLS 1.0 while not being vulnerable to BEAST.

However, this also meant that the sequence number was no longer the ASCII encoding of HTTP/1 (or the first 6 bytes of whatever record was last), but it is now the decryption of the IV block. As this block gets chosen randomly and the server and client don’t even need to decrypt it to the same thing, trying to influence this block to contain just the sequence number I want turned out to be impossible.

My next thought was to send a record with a wrong epoch first, which would be used to fill the buffer with the data I need, and then send another record with a DTLS header that would be used to overwrite the sequence counter. In DTLS, the epoch is indicated by the two upper bytes of the sequence counter. Records with an epoch different from the epoch of the current sequence counter are skipped (decryption or authentication isn't attempted).

However, this just moved the problem backwards: the length of this new record is still taken from the data already in the buffer, i.e. the decrypted IV of the previous record. Even though this new record will not be decrypted, SecureTransport must read it completely first, and I don't know what length it expects. Guessing would have a 1 in 2^16 chance of succeeding, which is large cryptographically speaking, but not quite practical. It might be possible to increase this chance by repeating the inserted record over and over, but then the attacker can only insert one record, as the next copy will fail to decrypt.

AES-GCM

I believe AES-GCM would be vulnerable to this, as it uses the sequence number as an implicit IV, though I haven't checked. While SecureTransport has an (at least partial) implementation of AES-GCM, it wasn't advertised by Safari, so I'm assuming it's unfinished.

Leaking bytes

Another avenue of exploitation would be to try to retrieve some information about the plaintext still in the buffer. As mentioned in my HTTPS example, .1 from HTTP/1.1 200 OK was interpreted as the length of the next record: those two ASCII bytes, read as a number, give 11825. This means SecureTransport will try to read 11825 more bytes before starting to decrypt the record (which will then fail the MAC, causing it to send an alert and close the connection). We can also do this the other way around: we write bytes one by one until SecureTransport closes the connection, and from that we know the 7th and 8th bytes of the plaintext of the previous record!

However, the value of the two bytes has to be less than the maximum record size of 2^14 (while a 16-bit field could encode up to 2^16), as otherwise SecureTransport will reject the record for being too large. This means that the first character must have an ASCII value below @, so it can't be any of the upper- or lowercase letters, but numbers and a few other punctuation characters would work.
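A quick check of the arithmetic, following the record-size figure used above:

    # ".1" read as a big-endian 16-bit length field:
    print(int.from_bytes(b".1", "big"))       # 11825 (0x2E31)
    # To stay below the 2**14-byte limit, the first leaked byte must be
    # below 0x40, i.e. below the character "@":
    print(2 ** 14, hex(2 ** 14), chr(0x40))   # 16384 0x4000 @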

Closing thoughts

After Heartbleed, this is another bug that exploits a DTLS code path that should never be reachable when using TLS. The impact is similar too: disclosing some contents of the other side's memory. However, this bug is limited to 2 bytes, while Heartbleed could retrieve 64 KiB per heartbeat. I guess DTLS has its uses, but maybe implementors should consider whether covering both DTLS and TLS in one library is worth the extra complexity in security-critical code.

A discovery that surprised me is the way SecureTransport deals with its internal buffers. The buffers that records are read into, and where the results of their decryption are stored, are never erased; there is only malloc and free. Buffers grow when they need to receive a larger record, but they never shrink again for as long as the connection is open. This means long-lived TLS connections waste a lot of memory once they receive a single large record, and the plaintext of that record will stay in memory for as long as the connection is open.

June 02, 2014 18:10

May 30, 2014

The XMPP Standards Foundation

Upcoming events

Until our new website launches with a dedicated 'what's on' section, I wanted to share some events coming up that might be of interest to our community.

Rikard Strid of Clayster is speaking at two events in June, focusing on XMPP and IoT.

For those of you over in America (or for those who now have an excuse to head that way!), here are the details:

12 June: IOT Expo in NYC

16 June: IOT World in Silicon Valley

Hopefully we can get a blog post from Rikard after the events, to share the goodness with everyone that couldn’t attend.

 

by laura at May 30, 2014 08:23

May 25, 2014

Ignite Realtime Blog

(a)Smack 4.0.0-rc2 released

Six weeks after the release of the first Release Candidate (-rc1) of Smack 4, the Ignite Realtime Community is proud to announce the release of the second and likely final Release Candidate.

 

Smack 4.0.0-rc2 contains many improvements and bug fixes. The API underwent some major changes and is considered stable. Now is the perfect time to test (a)Smack 4.0 if you haven't already. Smack is available from Maven Central (direct link). aSmack can be obtained from http://asmack.freakempire.de/4.0.0-rc2/

 

Make sure to read the upgrade guide and the previous blog post about Smack 4.

by Ignite Realtime Blog (communityadmin@igniterealtime.org) at May 25, 2014 08:57

May 21, 2014

Peter Saint-Andre

RFCs 7247 and 7248

A long time ago in an Internet far, far away, there happened a series of skirmishes known as the Instant Messaging Protocol Wars, involving brave warriors from the SIP and XMPP communities. Words were exchanged, epithets were hurled, swords were drawn, hand-to-hand combat ensued at IETF meetings, and much blood was shed. All for naught - as with most wars - because the real enemy (proprietary systems) invaded vast swaths of territory in the meantime.

May 21, 2014 00:00

May 19, 2014

The XMPP Standards Foundation

Happy Encrypted Network!

Today, a large number of services on the public XMPP network permanently turned on mandatory encryption for client-to-server and server-to-server connections (there’s a fine summary here). This is the first step toward making the XMPP network more secure for all users. Stay tuned for more updates as we work on ubiquitous authentication, secure DNS, end-to-end encryption, and other improvements.

by stpeter at May 19, 2014 17:18

ProcessOne

Boxcar Large Scale Developer Push Service Available to All Developers

ProcessOne and Boxcar have been building push services for real-time notifications since the launch of the push feature on iOS. Since that time, our core business has been deploying large-scale notification services for carriers and broadcasters, ensuring very high availability and very fast delivery in a cost-effective way.

A massive scale push service now available for everyone

This service has only been available for select large scale customers until now.

We are opening it today to everyone, including smaller developers. It offers many benefits that can attract startups and smaller brands:

  • Our SDK is tailored to be very small and lightweight. We do not want you to have to embed a huge stack of services you will likely never use just to plug into our platform.
  • Our platform is cost-effective and offers fixed-price plans for an unlimited number of messages. Our technology allows us to control the number of workers, redundancy, speed and message priority in realtime. Thus, you pay a fixed price for an unlimited number of messages and can select which delivery speed you need, considering your number of devices and the criticality of your typical notifications.
  • We can grow and scale with you. The same platform works for everyone from small companies to huge broadcasters.
  • Our platform offers a unique realtime analytics feature to see how people are reacting to a given notification in realtime.
  • Our platform goes beyond standard phone push notifications with an in-app notification channel. You can receive notifications from within your app that do not need to go through Apple or Google services.
  • Our platform supports iOS and all flavours of Android push services (Google GCM, Amazon ADM, Nokia Push Messaging).

What's the deal? What's the plan?

We have refactored a large portion of our large-scale service and added self-service configuration and credit card payment. You can freely create an account and set up your project. Once everything is working as expected in your mobile app, set up your credit card information (using the Recurly payment system) and you are ready to go live.

At the moment pricing is very simple:

  • Development plan is free (up to 200 devices).
  • Startup / independent developer plans are as follows:

    • 500 pushes per minute, unlimited monthly pushes and devices: 7 euros per month
    • 750 pushes per minute, unlimited monthly pushes and devices: 30 euros per month
    • 1000 pushes per minute, unlimited monthly pushes and devices: 100 euros per month
    • 1500 pushes per minute, unlimited monthly pushes and devices: 200 euros per month

We can of course handle much larger scale to suit the needs of larger corporations, broadcasters and carriers, with delivery speed up to 3 million pushes per minute.

However, the higher plans need to go through an invoicing process, as the monthly price goes beyond what is usually charged to a credit card. Contact us and we can make that happen.

We have not yet made all our features available, such as customer segmentation or geotargeted push, but those features are coming in due time.

You can expect the team to help on a daily basis, improving the FAQ, the documentation and the console itself, to help you understand push notifications and make sure you make the best of them to grow your business.

If you are a developer of an iOS and/or an Android application, please go ahead and create your free developer account on the Boxcar Developer Console.

We are here to assist you !

by Mickaël Rémond at May 19, 2014 15:19

ejabberd Community 14.05

ejabberd Community 14.05: the culmination of a year of change

Before getting into technical details of version 14.05 changes, let’s summarize an amazing year of ejabberd development.

Last year we made major changes in our development, release and support process.

ejabberd now has two faces:

  • ejabberd community is now improving at a very fast pace with changes coming from the community. That version improved a lot over the year. It has a lower memory footprint and gained many new features and support for several XEPs. We switched to a modular rebar build system. Documentation has been improved. Overall, it is a great basis to build innovative solutions.

  • ejabberd commercial is more stable and scalable than ever, and we have pushed its scalability both in terms of the number of supported nodes and in terms of the number of users supported on a single machine. ProcessOne is managing more and more deployments for our customers with it, and that kind of partnership with our customers just works, making everyone happier: a rock-solid platform managed by a team of experts.

For this latest release, we are very happy to see two new major contributors, Holger Weiß and Tsukasa Hamano. Congratulations!

Now, we are going further, exploring the realm of Voice over IP and SIP. ejabberd has been the reference for messaging, and now it can help you place calls over SIP. Please, read that again :)

We have integrated a SIP proxy/registrar into ejabberd that makes it possible, using the same credentials, to place SIP calls with a SIP client as well (for example, on your Android phone). We already had a STUN service and have now integrated TURN to make VoIP easier in most contexts. This is just the beginning, and we are waiting for your feedback to make things even simpler.

Note: ejabberd is also still compliant with Jingle, the pure-XMPP approach to VoIP. It is just a matter of choice: we let you use the protocol you prefer to place your calls. However, we do not bridge SIP and Jingle. Bridging is a pain, and in most deployments only one protocol will be used.

And finally, in a world where security is critical, we tightened our security to increase the default level of robustness of the crypto algorithms used.

Enjoy!

ejabberd Community 14.05 has great new features, several improvements and many bugfixes over the previous 13.12 release:

ejabberd now includes support for:
- XEP-0198: Stream Management (EJAB-532)
- XEP-0321: Remote Roster Management (EJAB-1381)
- RFC-3261: SIP proxy/registrar
- RFC-5766: TURN: Traversal Using Relays around NAT (EJAB-1017)

There are several improvements regarding encryption:
- Add option to specify openssl options
- Fix extraction of host names from certificates
- Fix certificate authentication for incoming s2s connections
- Fix handling of certificate verification errors for incoming s2s
- Handle “s2s_use_starttls: required_trusted” the same way for outgoing
- Support certificate verification for outgoing s2s connections
- Check TLS state before requesting SASL EXTERNAL
- Log TLS status for outgoing s2s with SASL EXTERNAL
- Verify host name before offering SASL EXTERNAL

Just to mention other improvements:
- New ejabberd command: disconnect_user/2
- New Bash completion script for ejabberdctl, experimental (EJAB-1042)
- Don’t provide current password in webinterface
- mod_register_web: check same acl as mod_register.
- Document and enable mod_carboncopy (XEP-0280) by default
- Make it possible to get/set vCards for MUC rooms
- Add Travis CI configuration file

And many many bugfixes all over the source code, most of them were introduced when ejabberd was updated to use binaries.

We would especially like to thank Holger Weiß for his XEP-0198 feature and varied bugfixing, and Tsukasa Hamano for his bugfixes.

This release requires at least Erlang/OTP R15 and works perfectly with R16B03. It should also work correctly with the new R17.

As usual, the release is tagged in the Git source code repository on:
https://github.com/processone/ejabberd

The source package and binary installers are available at ProcessOne:
http://www.process-one.net/en/ejabberd/downloads/

If you suspect you found a bug, search for or file a bug report in Jira:
https://support.process-one.net/browse/EJAB

by Christophe Romain at May 19, 2014 13:28

Prosodical Thoughts

Mandatory encryption on XMPP starts today

Last year Peter Saint-Andre laid out a plan for strengthening the security of the XMPP network. The manifesto, to date signed by over 70 XMPP service operators and software developers, offered a rallying point for those interested in ensuring the security of XMPP for its users.

Today is the date that the manifesto gave for the final 'flip of the switch': as of today many XMPP services will begin refusing unencrypted connections. If you run an XMPP service, we encourage you to do the same. On the xmpp.org wiki you can find instructions for all the popular XMPP server software. While XMPP is an open distributed network, obviously no single entity can "mandate" encryption for the whole network - but as a group we are moving in the right direction.

If you use an XMPP service provided by someone else and you encounter problems contacting family, friends or colleagues starting from today, it may be a sign that either your XMPP service or theirs is not properly supporting encryption. Contact the administrator of your service and let them know about this change. You can also use xmpp.net to test any server.

We still have some way to go. For example, today's change only ensures encryption (enough to defeat passive capture of traffic); it does not require you to have a valid certificate issued by a certificate authority (though some services do already choose to require this).

There is a whole lot of work being done to pave the way for a future without CAs, as they are a sticking point for many people - whether for financial, trust, privacy or philosophical reasons. Some current initiatives include DNSSEC and Monkeysphere, and some folks prefer to trust nothing less than hand-verified fingerprints! We already have experimental plugins available in prosody-modules for these things (mod_s2s_auth_dane, mod_s2s_auth_monkeysphere, mod_s2s_auth_fingerprint, etc.). If this is something you are interested in, take a look, help us test, and perhaps even contribute code!

Further reading:

by The Prosody Team at May 19, 2014 10:31

May 16, 2014

ProcessOne

Releasing Enhanced Security Debian AMI

We have decided to share our base Linux platform with the AWS community. It is based on Debian and includes security enhancements that come from integrating grsecurity into the kernel. Previously I released only kernel builds, known as the ESK kernel; now we are presenting a whole Debian image that includes the following changes:

  • ESK kernel 3.2.58
  • gradm 3.0
  • paxctl 0.8
  • TPE
  • special groups for TPE: untrusted, readproc, symlinkrestr
  • RBAC ready (disabled by default)
  • performance modifications, see sysctl.conf
  • default root filesystem is XFS
  • you can build your own kernel, see dirty script in /usr/src/

The AMI named debian-7.5-amd64-grsec-enhanced-security is available in the US-East (ami-64dc300c) and EU-West (ami-818747f6) regions. After starting an instance you can log in to the root account. Users who need root access must belong to the special group 'admin'. For more information on using the grsecurity kernel, see the documentation. We will be providing updates to the AMI when necessary. More information about ESK can be found here; you can also follow us for updates on Twitter: ProcessOne or me.

by Zbyszek Żółkiewski at May 16, 2014 12:15

May 15, 2014

Peter Saint-Andre

A Thoreauvian Theme

As hinted in my recent post on working through the writings of various philosophers, I think I've found an intriguing theme for the short book (Walking With Thoreau) that I aim to complete in time for the Thoreau bicentennial in 2017. The first two sentences of Thoreau's essay Walking provide a clue:

May 15, 2014 00:00

RFC 7259

Many years ago, I proposed a "Jabber-ID" email header so that you could advertise your Jabber/XMPP address in the email messages you send. Somehow it got caught up in the IETF politics of the time and was never standardized. Yet an email header doesn't need to be defined in a standards-track RFC in order to be added to IANA's provisional registry for email header field names (any old specification will do). Although draft-saintandre-jabberid existed in a lapsed state for years, it seemed better to make the document an informational RFC through the independent stream because RFCs are more stable and more referenced than stale Internet-Drafts. Thus RFC 7259. Enjoy!

May 15, 2014 00:00

May 09, 2014

Peter Saint-Andre

RFC 7259

Recently Keith Nerdin asked me on Twitter what I mean by saying that I "work through" an author, as I am doing now with both Nietzsche and Thoreau. Because I found it difficult to capture the entire process in 140 characters, here's a longer treatment of the subject.

May 09, 2014 00:00

May 06, 2014

Ignite Realtime Blog

Openfire 3.9.3 has been released

The Ignite Realtime Community is pleased to announce that Openfire version 3.9.3 is available for download!

 

Openfire is a real time collaboration (RTC) server licensed under the Open Source Apache license. It uses the only widely adopted open protocol for instant messaging, XMPP (also called Jabber). Openfire is incredibly easy to set up and administer, but offers rock-solid security and performance.

 

This release corrects the regressions found with the 3.9.2 release, which include:

 

  • [OF-782] - Wrong URL generated for editing groups with spaces in the names
  • [OF-783] - Apply encryption to secure properties during setup (updating Openfire backed by LDAP would fail)
  • [OF-787] - TLS server-to-server connections are not working with 3.9.2
  • [OF-791] - Joining new MUC room results in a 404 error
  • The initial 3.9.2 release had a packaging problem with the Windows installer.

 

A full changelog can be found here.

 

We'd like to invite interested developers to fork our GitHub repository and contribute pull requests with your fixes. Ongoing discussions are happening in the community forums about the future of Openfire development. Please join in if you are interested!

 

Please report bugs with this release in the community forums. Please do not report bugs by commenting on this blog post!

 

Here are md5sum's for the released files:

 

md5sum                              filename
59d8bd304397850d9b229d800aad3295    openfire_3_9_3.dmg
872f728b1b2d43407492452c6c323166    openfire_3_9_3.exe
60823e2ccc79165992f3298971134095    openfire_3_9_3.tar.gz
1f425ed151762ae689bfab472ce26605    openfire_3_9_3.zip
a5d43cfd91785b269e50814e5cd71b8e    openfire_src_3_9_3.tar.gz
987a494caca5dd770dd3ed38e3f753c8    openfire_src_3_9_3.zip
fe514555792d82f30f0c063f2e11ff9b    JSopenfire-3.9.3-ALL.pkg.gz
742ca4a8b2b971176feb9e8d1ad26e63    openfire-3.9.3-1.i386.rpm
76a0f776025227799e862955e0f40018    openfire-3.9.3-1.src.rpm
a71be983670491cf280e7ef02315f234    openfire_3.9.3_all.deb

by Ignite Realtime Blog (communityadmin@igniterealtime.org) at May 06, 2014 19:25

May 01, 2014

Ignite Realtime Blog

Openfire 3.9.2 has been released

The Igniterealtime Community is pleased to announce that Openfire version 3.9.2 is available for download!

 

Openfire is a real time collaboration (RTC) server licensed under the Open Source Apache license. It uses the only widely adopted open protocol for instant messaging, XMPP (also called Jabber). Openfire is incredibly easy to set up and administer, but offers rock-solid security and performance.

 

This release contains a large number of fixes (70 Jira issues resolved) aimed at increasing stability, security and XMPP standards compliance.  A full changelog can be found here.  Some of the highlights of this release are:

 

  • [OF-103] - [MUC] Allow nicknames to be used more than once in the same room by a single user
  • [OF-114] - Clearing cache can lock up MUC
  • [OF-455] - Some unicode pattern in status message can break the session connection
  • [OF-669] - Visually failed first login to Admin Console
  • [OF-714] - Add ability to encrypt properties so they are encrypted in the db and do not appear in the admin console
  • [OF-745] - Use TLS-dialback even if that mechanism is not advertised
  • [OF-757] - Allow s2s message of subdomain of XMPP domain when no components are found
  • [OF-569] - Add deluser adhoc command
  • [OF-764] - Group chat history (MUC) should match configuration after server restart
  • [OF-771] - MUC service should flush recent history before shutting down
  • [OF-125] - Restrict discovery of rooms based on users membership
  • [OF-297] - fix: mutual roster deletion problem
  • [OF-770] - CVE-2014-2741 Uncontrolled Resource Consumption with XMPP-Layer Compression
  • [OF-722] - Openfire should save XEP-0184 delivery receipts as offline messages
  • [OF-758] - Add support for XEP-0280 "Message Carbons"

 

 

This is the first release of Openfire after our migration of code development to Github.  We'd encourage interested developers to fork Openfire and send pull requests!  It is best for you to create a dedicated branch on your fork to submit the eventual pull request from. Please note that we are not syncing the github repository to the previous openfire subversion repository hosted on igniterealtime.

 

Here are MD5sums for the downloads available.

 

md5sum                              Filename
dba5d987f3473c59546b24312e6bbc23    JSopenfire-3.9.2-ALL.pkg.gz
a215348924c32b5b8aa84bba27355288    openfire-3.9.2-1.i386.rpm
34a21c2e48ab9358bb187b5f151a3ed4    openfire-3.9.2-1.src.rpm
6be23cd0822b7dbfab51c663a2df0585    openfire_3.9.2_all.deb
6539489a7760a8f031e13f53453a3be5    openfire_3_9_2.dmg
5601aff0fd9b1d8eb1c382013a9f8ea3    openfire_3_9_2.exe (original with pack issue)
ab3fccaf684e478cebd992cf93b6ff2d    openfire_3_9_2.exe (updated)
0b4fab9f9e4834be4e747dc0fc47cff7    openfire_3_9_2.tar.gz
886d6311429c382ec0655b03d692379f    openfire_3_9_2.zip
f3e554025abcb7e9b46a70dfb24caf2c    openfire_src_3_9_2.tar.gz
0fe17ce148e32c8ce934dd152e64c6f8    openfire_src_3_9_2.zip

 

As always, we welcome your feedback, suggestions, tips, hints, questions and other contributions in the Igniterealtime Community Forums. Please do not respond to this blog post with questions.

by Ignite Realtime Blog (communityadmin@igniterealtime.org) at May 01, 2014 02:36