Planet Jabber

June 19, 2018

Paul Schaub

Summer of Code: The demotivating week

I guess in anybody's project there is one week that stands out from the others by being way less productive than the rest. I just had that week.

I had to take one day off on Friday due to circulation problems after a visit to the doctor (syringes suck!), so I had the joy of an extended weekend. On top of that, I was not at home at the time, so I didn’t write any code during those days.

At least I got some coding done last week. Yesterday I spent the whole day scratching my head about an error that I got when decrypting a message in Smack. Strangely, that error did not happen in my pgpainless tests. Today I finally found the cause of the issue and a way to work around it. It turns out that somewhere between key generation and loading the key from persistent storage, something goes wrong. If I run my test with fresh keys, everything works fine, while if I run it after loading the keys from disk, I get an error. It will be fun working out what exactly is going wrong. My breakpoint-debugging skills are getting better, although I still often seem to skip over important code points while debugging.

My ongoing efforts of porting the Smack OX code over from using bouncy-gpg to pgpainless are still progressing slowly but steadily. Today I sent and received a message successfully, although the bug I mentioned earlier is still present. As I said, it’s just a matter of time until I find it.

Apart from that, I created another very small pull request against the Bouncycastle repository. The patch just fixes a log message which irritated me. The message stated that some data could not be encrypted, while in fact data was being decrypted. Another patch I created earlier has been merged \o/.

There is some really good news:
Smack 4.4.0-alpha1 has been released! This version contains my updated OMEMO API, which I have been working on for at least half a year.

This week I will continue to integrate pgpainless into Smack. There is also still a significant lack of JUnit tests in both projects. One issue I have is that during my project I often have to deal with objects that bundle information together. Those data structures are needed in smack-openpgp and smack-openpgp-bouncycastle, as well as in pgpainless. Since smack-openpgp and pgpainless do not depend on one another, I need to write duplicate code to provide all modules with classes that offer the needed functionality. This is a real bummer and creates a lot of ugly boilerplate code.

I could theoretically create another module which bundles those structures together, but that is probably overkill.

On the bright side of things, I passed the first evaluation phase, so I got a ton of motivation for the coming days :)

Happy Hacking!

by vanitasvitae at June 19, 2018 18:47

June 17, 2018

Ignite Realtime Blog

Smack 4.3.0-rc1 and 4.4.0-alpha1 released

@Flow wrote:

The Smack developer community is proud to announce the availability of the first release candidate of Smack 4.3. Users of Smack are encouraged to switch to the new 4.3 release family of Smack. The Smack 4.3 API is considered frozen, and the API changes between 4.2 and 4.3 are not as significant as the changes between Smack 4.1 and 4.2. More information can be found in the Readme of Smack 4.3 (please note that the Readme is a work in progress).

Together with the 4.3.0-rc1 release, we have also published the first alpha of Smack 4.4, which includes the updated and improved OMEMO API. Credits for this go to Paul.

As always, all the release artifacts are available on Maven Central.

by @Flow Florian Schmaus at June 17, 2018 18:41

June 13, 2018

Monal IM

iOS 3.0.2 is out

I have released 3.0.2 to the iOS App Store.  So far I appear to have resolved the worst crashes.

by Anu at June 13, 2018 12:00

June 12, 2018

Tigase Blog

Tigase services continue to improve working with Amazon Web Services!

Recently, we began utilizing our DualIP variant of our Load Balancing solution (based on see-other-host XMPP semantics) on our servers hosted on AWS and have found out that our load balancing implementation behaves very well with Amazon's ELB environment.

by Daniel at June 12, 2018 23:26

Monal IM

Update on OMEMO

I have an update on the status of OMEMO in Monal. I’ve completed my spike and have a very rough implementation working. I am able to communicate with Gajim and Chatsecure. I am actually using a lot of the same OMEMO code as Chatsecure, using Chris’ CocoaPods. The shared code base should reduce duplicated effort and ensure compatibility between the two main Apple-platform clients going forward.

The current code isn’t anywhere near production-ready, but once I clean it up more, you should start seeing it as an option to turn on in Mac betas in the next month or so. Below you can see my interactions with Gajim and Chatsecure.

by Anu at June 12, 2018 02:21

June 11, 2018

Paul Schaub

Summer of Code: Evaluation and Key Lengths

The week of the first evaluation phase is here. This is the fourth week of GSoC – wow, time flew by quite fast this year :)

This week I plan to switch my OX implementation over to PGPainless in order to have a working prototype which can differentiate between sign, crypt and signcrypt elements. This should be pretty straightforward. In case anything goes wrong, I’ll keep the current implementation as a working backup solution, so we should be good to go :)

OpenPGP Key Type Considerations

I spent some time testing my OpenPGP library PGPainless, and during testing I noticed that messages encrypted and signed using keys from the family of elliptic curve cryptography were substantially smaller than messages encrypted with common RSA keys. I already knew that one benefit of elliptic curve cryptography is that keys can be much smaller while providing the same security as RSA keys. But what was new to me is that this also applies to the length of the resulting message. I did some testing and got some interesting results:

In order to measure the lengths of the produced ciphertext, I created some code that generates two sets of keys and then encrypts messages of varying lengths. Because OpenPGP for XMPP: Instant Messaging only uses messages that are both encrypted and signed, all messages created for my tests are encrypted to, and signed with, one key. The size of the plaintext messages ranges from 20 bytes all the way up to 2000 bytes (1000 chars).

Diagram comparing the lengths of ciphertext of different crypto systems

Comparison of Cipher Text Length

The resulting diagram shows how quickly the size of OpenPGP encrypted messages explodes. Let’s assume we want to send the smallest possible OX message to a contact. That message would have a body of less than 20 bytes (less than 10 chars). The body would be encapsulated in a signcrypt-element as specified in XEP-0373. I calculated that the length of that element would be around 250 chars, which makes 500 bytes. Encrypting and signing those 500 bytes using 4096-bit RSA keys yields 1652 bytes of ciphertext. That ciphertext is then base64 encoded for transport (a rule of thumb for calculating base64 size is ceil(bytes/3) * 4), which results in 2204 bytes. Those bytes are then encapsulated in an openpgp-element (which adds another 94 bytes) that can be appended to a message. All in all, the openpgp-element takes up 2298 bytes, compared to a normal body, which would only take up around 46 bytes.

So how do elliptic curves come to the rescue? Let’s assume we send the same message again, this time using 256-bit ECC keys on the curve P-256. Again, the length of the signcrypt-element would be 250 chars or 500 bytes in the beginning. OpenPGP-encrypting those bytes leads to 804 bytes of ciphertext. Applying base64 encoding results in 1072 bytes, which finally makes a 1166-byte openpgp-element. Around half the size of an RSA encrypted message.
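As a sanity check, the arithmetic of the two examples above can be reproduced in a few lines of Java. The byte counts are the ones quoted in this post; only the base64 rule of thumb and the element wrapper overhead are computed:

```java
public class OxSizeCheck {
    // Rule of thumb from the post: base64 output is ceil(bytes / 3) * 4.
    static int base64Size(int rawBytes) {
        return ((rawBytes + 2) / 3) * 4;
    }

    // Total openpgp-element size: base64-encoded ciphertext plus ~94 bytes of XML wrapper.
    static int openpgpElementSize(int cipherBytes) {
        return base64Size(cipherBytes) + 94;
    }

    public static void main(String[] args) {
        // 4096-bit RSA: the 500-byte signcrypt element became 1652 bytes of ciphertext.
        System.out.println(openpgpElementSize(1652)); // 2298
        // P-256 ECC: the same plaintext became 804 bytes of ciphertext.
        System.out.println(openpgpElementSize(804));  // 1166
    }
}
```

Running this reproduces the 2298-byte and 1166-byte totals from the text.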

For comparison: I estimated a typical XMPP chat message body to be around 70 characters or 140 bytes based on a database dump of my chat client.

We must not forget, however, that the stanza size follows a linear function of the form y = m*x+b, so as the plaintext size grows, the difference between RSA and ECC becomes less and less significant.
Looking at the data, I noticed that applying OpenPGP encryption always adds a roughly constant number of bytes to the size of the plaintext. Using 256-bit ECC keys only adds around 300 bytes, encrypting a message using 2048-bit RSA keys adds ~500 bytes, while RSA with 4096 bits adds around 1140 bytes. The formula for my setup would therefore be y = x + b, where x and y are the sizes of the message before and after applying encryption and b is the overhead added. This formula doesn’t take base64 encoding into consideration. Also, if multiple participants (and therefore multiple keys) are involved, the formula will underestimate, as the overhead grows further.
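The model can be written down as a tiny helper. Note that the overhead constants below are the rough values measured above, not exact figures, and real values vary slightly per message:

```java
public class CipherSizeModel {
    // Approximate overhead b (in bytes) added by encrypting and signing,
    // as measured in this post for a single recipient key.
    static final int OVERHEAD_ECC_256  = 300;
    static final int OVERHEAD_RSA_2048 = 500;
    static final int OVERHEAD_RSA_4096 = 1140;

    // y = x + b: estimated ciphertext size before base64 encoding.
    static int estimateCiphertextSize(int plaintextBytes, int overhead) {
        return plaintextBytes + overhead;
    }

    public static void main(String[] args) {
        // A 500-byte signcrypt element with 4096-bit RSA: ~1640 bytes,
        // close to the 1652 bytes actually measured above.
        System.out.println(estimateCiphertextSize(500, OVERHEAD_RSA_4096));
    }
}
```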

One could argue that using smaller RSA keys would reduce the stanza size as well, although not as much, but remember that RSA keys have to be big to be secure. A 3072-bit RSA key provides the same security as a 256-bit ECC key. Quoting Wikipedia:

The NIST recommends 2048-bit keys for RSA. An RSA key length of 3072 bits should be used if security is required beyond 2030.

As a conclusion, I propose to add a paragraph to XEP-0373 suggesting the use of ECC keys to keep the stanza size low.

by vanitasvitae at June 11, 2018 20:41

June 06, 2018

Paul Schaub

Summer of Code: PGPainless 2.0

In previous posts, I mentioned that I forked Bouncy-GPG to create PGPainless, which will be my simple to use OX/OpenPGP API. I have some news regarding that, since I made a radical decision.

I’m not going to fork Bouncy-GPG anymore, but will instead write my own OpenPGP library based on BouncyCastle. The new PGPainless will be more suitable for the OX use case. The main reason I did this was that Bouncy-GPG followed a pattern where the user has to know whether an incoming message was encrypted or signed or both. This pattern does not fit OX very well, since you don’t know what content an incoming message has. This was a deliberate decision made by the OX authors to circumvent certain attacks.

Ironically, another reason why I decided to write my own library is Bouncy-GPG’s many JUnit tests. I tried to make some changes, which resulted in breaking tests all the time. This might of course be a bad sign, indicating that my changes are bad, but in my case I’m pretty sure that the tests are just a little bit oversensitive :) For me it would be less work and more fun to create my own library than to try to fix Bouncy-GPG’s JUnit tests.

The new PGPainless is already capable of generating various OpenPGP keys, encrypting and signing data, as well as decrypting messages. I noticed that, using elliptic curve encryption keys, I was able to reduce the size of (short) messages by a factor of two. So recommending EC keys to implementors might be worth a thought. There is still a little bug in my code which causes signature verification to fail, but I’ll find it – and I’ll kill it.

Today I spent nearly 3 hours debugging a small bug in the decryption code. It turns out that this code works as I intended:

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.nextObject(); // read the first object directly from the factory

while this code does not:

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.iterator().next(); // read via the Iterable interface

The difference is subtle, but apparently deadly.

You can find the new PGPainless on my Gitea instance :)

by vanitasvitae at June 06, 2018 17:21

Erlang Solutions

MongooseIM 3.0.0 - Application turbocharger

MongooseIM 3.0.0 is out and with it come many improvements to our global messaging solution! Over the years we have proven that MongooseIM is the way to go when building a scalable, secure messaging system that never fails. With new features and fixes, our battle tested, highly customisable platform provides an enterprise friendly toolbox everyone can use. Whether you’re an XMPP expert or an entrepreneur looking to bring to life your idea for a community building app, MongooseIM platform helps you build a product tailored to your needs, that will easily grow to match your ambition. Find out what goodies we’ve managed to pack up for you this time and see how we can aid your users’ experience with our truly instant messaging platform.

What is so great about 3.0.0?!

As a team we’ve switched into a faster gear and our latest release is a reflection of that. MongooseIM 3.0.0 is an enterprise-ready, stable solution that now works faster than ever, delivering a smooth messaging experience to your users. It features important upgrades that will allow your servers to process even more messages per minute and save memory for additional user sessions. All of this is thanks to a couple of improvements, with the new XML parser being the most prominent highlight.

Efficiency is not the only reason why we’re so proud of this release. It also features a prototype Inbox implementation. It’s an essential extension for virtually every chat application. Our rich experience in this area allowed us to design a solution that should already match most use cases and will be expanded even further!

As usual, there are also other improvements we’d like to share with you. You may find the full list in our changelog but we’ve picked five of them to describe, as we feel you should learn more about them.

Achieve more with the same hardware

Thanks to several important changes and improvements, MongooseIM is now able to process information faster and consume less resources. It means your servers may handle more users and traffic with the MongooseIM upgrade alone.

Depending on your specific application, 25-400% better performance may be expected. In fact, the richer and more complex the traffic, the better the results you’ll get compared to previous MongooseIM versions!

Three aspects of MongooseIM have been modified in order to achieve this:

  1. All messages from users are interpreted in a completely new way
  2. More users can connect to a single server per second
  3. All user sessions store as little information as possible when they are idle

Hello inbox

We’ve implemented Inbox features in the past for various projects and the time has come to pour the best ideas and experiences into an extension open for everyone!

A few words of explanation for those not familiar with the Inbox feature. It is the view in a chat application that you see every time you open it: a list of all conversations, with excerpts of the last messages and unread message counts. Simple as that!

Unfortunately, as there is no official Inbox specification in XMPP yet, we’ve come up with a custom protocol (thank you XMPP for your extensibility!) for this purpose. We’re going to submit it as an XEP (XMPP Extension Protocol) for review by the community, but in the meantime you can enjoy its simplicity and intuitiveness. All you need to do is enable mod_inbox and implement a few simple IQ stanzas in your client application.

Please keep in mind, though, that this extension is still in an experimental stage and will be marked as stable in one of our future releases.

Under the hood

We’ve also added some lower level changes that are going to be useful to developers, CTOs and devops.

Performance bundle: Acceptor pool, session hibernation, new XML parser

We’ve been using the expat parser since the very beginning. Recently, our brave C++ warrior in the MongooseIM team thought of replacing it with an alternative: RapidXML. Everything indicated that it might consume fewer resources if properly used.

And he did use it in an excellent way. The whole C code was rewritten in C++ with the new library integrated. Since RapidXML requires only minimal per-user state (which is actually kept in Erlang terms), far fewer allocations are required and the code is simply cleaner.

What about the performance itself? For smaller stanzas the difference is not drastic, but it is noticeable. The test involved 100,000 users sending standard, small messages.

Please take a look at CPU usage graphs first:

Fig. 1: MongooseIM 2.2.2 CPU usage over time

Fig. 2: MongooseIM 3.0.0 CPU usage over time

Very similar, but the new version is a bit better overall.

Regarding the memory consumption:

Fig. 3: MongooseIM 2.2.2 memory usage over time

Fig. 4: MongooseIM 3.0.0 memory usage over time

Now that’s already impressive: 3.0.0 uses ~2 GB less RAM, which is an over 25% improvement!

Wait, wait. If that was “impressive”, then what do we call this? 1000 users exchanging large, ~36 kB messages. CPU goes first again.

Fig. 5: MongooseIM 2.2.2 CPU usage over time

Fig. 6: MongooseIM 3.0.0 CPU usage over time

Well, it’s only 4 times better. :) In terms of memory usage the drop is less significant but still observable.

Fig. 7: MongooseIM 2.2.2 memory usage over time

Fig. 8: MongooseIM 3.0.0 memory usage over time

To wrap it up: all applications will surely benefit from a new parser but it’s especially important for those which process complicated, nested, rich stanzas.

Obviously, rewriting thousands of lines of code in C++ isn’t always the only method of improving performance. Sometimes it’s about small ideas, such as using a pool of acceptors or eager hibernation.

The former involves using more than one process to accept incoming connections from clients. Even though accepting a connection is a fairly cheap operation (it only involves accepting the connection and creating a client process), the single acceptor process became a bottleneck in some applications. In 3.0, 100 acceptors are created by default.

The latter is a bit more complicated. For people less familiar with Erlang, this might sound similar to Java’s Hibernate framework. Actually, Erlang’s hibernation is not related to database mappings. When a process is hibernated, its memory is garbage collected and it is removed from the scheduler queue (a bit of a simplification, but you get the idea). We already had such a mechanism in place, but the hardcoded timeout for hibernation was 60 s. When you think about it, a client’s process is idle most of the time, at least in a computer’s world, where a single CPU cycle takes less than 1 ns. Load tests have proved that hibernating immediately after processing a stanza leads to lower memory usage at minimal cost in CPU time.

The first two graphs show CPU usage in a presence-based test. Most of the time ~6.5k users were connected and sending a presence update every 20 seconds to a roster of 8 friends.

Fig. 9: MongooseIM 2.2.2 CPU usage over time

Fig. 10: MongooseIM 3.0.0 CPU usage over time

Fairly similar, I’d say. The second pair of graphs shows the memory usage decrease in the same test.

Fig. 11: MongooseIM 2.2.2 memory usage over time

Fig. 12: MongooseIM 3.0.0 memory usage over time

Now, that looks definitely better, right? However, if it turns out that frequent hibernations impact your server’s performance, you may easily tune the timeout in the configuration file (look for the hibernate_after option).

Improved ODBC support

Before 3.0, MongooseIM was using the odbc application from OTP to execute queries via ODBC. Unfortunately, this library is not maintained actively enough to match our requirements. This especially applies to SQL type support. Luckily for us, there is a community-developed repository named eodbc. With its help, MongooseIM’s compatibility with e.g. MSSQL improved significantly. What is more, in order to ensure it stays that way, we’ve begun testing the ODBC connection with MSSQL on Travis!

A byproduct of this refactoring is a completely new escaping API in our RDBMS layer. It’s more intuitive now and much less error-prone. It is now virtually impossible to use an unescaped value in a query, and the escaping is always done appropriately for the chosen RDBMS.

Farewell, Message Archive Management v0.2!

Getting rid of MAM 0.2 support means several things, such as easier code maintenance. For example, there were completely separate functions and tests present to handle 0.2 stanzas.

Another important difference is the lack of the “archived” element. Please configure MongooseIM to inject “stanza-id” into messages instead.

Despite its importance (MAM 0.2 was the first version supported by MongooseIM), it has become obsolete over time. If your application still uses MAM 0.2, we highly recommend you update your XMPP library and the code using it. Storage-backend-wise, newer versions are backwards compatible.


Please feel free to read the detailed changelog, where you can find a full list of source code changes and useful links.

What’s next?

After introducing big, important changes over the past few releases, we’re going to take our time to polish what we already have.

Above all else, the Inbox feature is going to be expanded. We’re planning to support more backends and introduce new functions (e.g. sorting by timestamp). Also, we’d like to propose a proto-XEP to the XMPP community, so the conversation list we’ve designed may become an official standard common to clients and other servers. After all, MongooseIM is not our only priority; we care about the state of XMPP as a communication protocol as well!

We’re slowly heading towards a configuration file revolution. Its highlight will be a new config format that will be friendlier to everyone not familiar with Erlang syntax. Currently we’re considering YAML, TOML, Cuttlefish and Conform. What is more, the configuration will undergo a major cleanup and flexibility improvements.

The third item I’d like to share with you applies to everyone working with MongooseIM code. One of the main structures in our server, the Mongoose Accumulator (or mongoose_acc), is going to be significantly refactored. Our aim is to make it more intuitive and organised. Its content will be richer, with a clear contract and scope. We’re redesigning it as you read these words. :)

Test our work on MongooseIM 3.0 and share your feedback

Help us improve the MongooseIM platform:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter:
  4. Download the Docker image with the new release
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.

June 06, 2018 10:52

Jérôme Poisson

Decentralized code forge, based on XMPP

With the recent announcement concerning the change of ownership of the biggest known centralized code forge, we have seen discussions here and there about the creation of a similar, but decentralized, tool.

I've used this occasion to recall the work done to implement tickets and merge requests in Salut à Toi (SàT), work which went relatively unnoticed at the time of writing, about 6 months ago.

Now, I would like to give some details on why we are building those tools.

First of all, why not the big forge? After all, a good part of current libre software is already using it! Well, first, it's not libre, and we committed ourselves in our social contract to use libre software as much as possible, infrastructure included. Then, because it's centralized, and there too our social contract is pretty clear, even if it's not as important for infrastructure as it is for SàT itself. Finally, because we are currently using Mercurial, and the most famous forge is built around Git.
We do not hide the fact that we already asked ourselves whether to use this platform or not at a general assembly (cf. the minutes, in French); we were mainly interested in the great visibility it can offer.

« It's centralized? But "Git" is decentralized! » is a point we often hear, and it's true: Git (and Mercurial, and some others) is decentralized. But a code forge is not the version control system; it's all the tools around it: hosting, tickets, merge/pull requests, comments, wikis, etc. And those tools are not decentralized at the moment, and even if they are often usable through a proprietary API, they are still under centralized rules, i.e. the rules of the hosting service (and its technical hazards). This also means that if the service doesn't want a project, it can refuse, delete, or block it.

Centralization also makes it technically easy to catalog and search projects… which are on the service. Any external project will then have more difficulty being visible and attracting contributors/users/help. This is a situation we know very well with Salut à Toi (we are not present on proprietary and centralized "social networks" for the same reasons), and we find it unacceptable. It goes without saying that concentrating projects on a single platform is the best way to contribute to and exacerbate this state of affairs.
Please note, however, that we are not judging or attacking people and projects who made different choices. These positions are linked to our political commitment.

Why, then, not use existing libre projects that are already advanced and working, like Gitlab? Well, first because we are working with Mercurial and not Git, and secondly because we would once again put ourselves into a centralized solution. And there is another point: there are nearly no decentralized forges (Fossil, maybe?), and we already have nearly everything we need with SàT and XMPP. And let's add that there is some pleasure in building the tools we are lacking.

SàT is on the way to becoming a complete ecosystem, offering most, if not all, of the tools needed to organise and communicate. But it is also generic and re-usable. That's why the "merge requests" system is not tied to a specific SCM (Git or Mercurial); it can be used with other software, and it is actually not only usable for code development. It's a component which will be used wherever it is useful.

To conclude this post, I would like to remind you that if we want to see a decentralized, ethical and politically committed alternative with which to build our code, organise ourselves, and communicate, we can make this real by cooperating and contributing, be it with code, design, translations, documentation, testing, etc.
We recently got some help with packaging on Arch (thanks jnanar and previous contributors), and there are continuous efforts for packaging in Debian (thanks Robotux, Naha, Debacle, and other Debian XMPP packagers). If you can participate, please contact us (see our official website); together we can make a difference.
If you are lacking time, you can support us as well on Liberapay. Thanks in advance!

by goffi at June 06, 2018 05:50

Monal IM

iOS 3.0.2 and OSX 2.1.2 betas out

I am still cleaning up all of the issues people have seen (and some old friends) in the latest releases. There are new betas out. I will be looking for feedback and crash reports. I hope to have the next updates out this week. I know it has been almost weekly releases since the 3.0 release. I am hoping to slow down to a more manageable release cycle once the code is more stable.

by Anu at June 06, 2018 02:51

June 01, 2018

Paul Schaub

Summer of Code: Command Line OX Client!

As I stated earlier, I am working on a small XMPP command line test client, which is capable of sending and receiving OpenPGP encrypted messages. I just published a first version :)

Creating command line clients with Smack is super easy. You basically just create a connection, instantiate the manager classes of the features you want to use, and create some kind of read-execute-print loop.
Last year I demonstrated how to create an OMEMO-capable client in 200 lines of code. The new client follows pretty much the same scheme.
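Stripped of all Smack specifics, the core of such a read-execute-print loop is just a command dispatcher. Here is a minimal sketch; the command names are made up for illustration, and a real client would call into Smack's manager classes inside the handlers:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
import java.util.function.Function;

public class MiniRepl {
    // Maps a slash-command to a handler; hypothetical commands for illustration.
    static final Map<String, Function<String, String>> commands = new HashMap<>();

    static {
        commands.put("/add", jid -> "added " + jid + " to the roster");
        commands.put("/fingerprint", jid -> "fingerprint of " + jid + ": ...");
    }

    // Execute one line of input; unknown input falls through as a chat message.
    static String execute(String line) {
        int space = line.indexOf(' ');
        String cmd = space < 0 ? line : line.substring(0, space);
        String arg = space < 0 ? "" : line.substring(space + 1);
        Function<String, String> handler = commands.get(cmd);
        return handler != null ? handler.apply(arg) : "sending message: " + line;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {                      // read
            System.out.println(execute(in.nextLine())); // execute and print
        }
    }
}
```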

The client offers some basic features like adding contacts to the roster, as well as, obviously, OX-related features like displaying fingerprints; generation, restoration and backup of key pairs; and of course encryption and decryption of messages. Note that up to this point I haven’t implemented any form of trust management. For now, my implementation considers all keys whose fingerprints are published in the metadata node as trusted.

You can find the client here. Feel free to try it out, instructions on how to build it are also found in the repository.

Happy Hacking!

by vanitasvitae at June 01, 2018 15:02

Jérôme Poisson

File sharing landing in next release of Salut à Toi

Last big feature before the preparation of alpha release, file sharing is now available for Salut à Toi.

SàT has been able to send or receive files for years, either directly when two people are connected at the same time, or via an HTTP upload to the server. It is now possible to share a file hierarchy, or in other words one or several directories. There are two main use cases: using a component, or a client.

sharing a directory with Cagou

Sharing directory with client

The first way to use file sharing is from device to device. It can be used, for instance, to share pictures taken on your phone with your desktop computer, or to quickly give your coworkers access to discussion papers. To handle permissions, you just have to give the JIDs (XMPP identifiers) of the people allowed.

The transfer uses Jingle technology, which will choose the best way to send the file. That means that if you are on the same local network (e.g. the previous case of sharing your phone's pictures with your desktop computer when you're at home), the connection will stay local, and the server will only see the signalling (the data needed to establish the connection).

But if your devices are not on the same local area network, a connection is still possible, and it will try to be direct when possible.

file sharing with a client

Above you can see how easy it is to share a directory with Cagou, the desktop/Android frontend of Salut à Toi.

File sharing component

SàT can now act as a component (which is more or less a generic server plugin), and a first one allows a user to upload, list and retrieve files.

This is really handy when you want to keep some files for later private use (and access them from any device), or to share a photo album with, for instance, your family.

This is on the way to becoming a service similar to "cloud storage", except that you keep control of your data.

file sharing with a component

As you can see, it's pretty similar to the workflow with a client.

With the invitation system now available in SàT, you can even share with people without an account.

Some notes

File transfer is currently unencrypted, but encryption is planned soon, either with OX (OpenPGP) or OMEMO.
The base feature is there and working, but some improvements are planned in the more or less short term: quotas, file synchronization, e2e encryption, advanced search.


You'll find instructions on how to use this feature on the wiki.

Of course you'll need to use the development version; don't hesitate to ask for help in the SàT room: (or via browser).

A package is now available for Cagou on AUR for Arch Linux, thanks to jnanar.

Help needed!

SàT is a huge project with strong ethical roots. It's unique in many ways, and needs a lot of work. You can help it succeed either by supporting us on Liberapay or by contributing (check the official website or join our room for details).

Next post will be about alpha release, stay connected ;)

by goffi at June 01, 2018 10:26

May 31, 2018

The XMPP Standards Foundation

The XMPP Newsletter, 1 June 2018

Welcome to another edition of the XMPP newsletter.


The next release of Salut à Toi will include file sharing. Sharing files directly between two users has been possible before, but now it's possible to share a file hierarchy, or in other words one or several directories. To share with someone, just use their XMPP address (JID).

Christopher Muclumbus is a new project to publicly list XMPP chat rooms. It provides a web interface with full-text search for room names, descriptions and addresses. Only rooms which are configured to be publicly listed are shown.

Blazemeter have written about 5 ways to load test chat plugins with JMeter. JMeter is an open source Java application designed to load test functional behavior and measure performance. The article briefly discusses XMPP and mentions that JMeter has a plugin which provides support for XMPP.

Fanout has removed XMPP support from their cloud offering. It was never used at scale, and was mainly relegated to chat bots that needed compatibility with Google Talk. However, as Google phased out XMPP federation, its usefulness dwindled.

JC Brand wrote a blogpost about the Gulaschprogrammiernacht, a hacker/maker event in Karlsruhe, where six people worked on XMPP-related projects and plans were hatched to organize a sprint in Cambridge.

Google Summer of Code Projects

Paul Schaub has been blogging regularly in May about his Google Summer of Code project, adding OpenPGP support for Smack (XEP-0373 and XEP-0374).

Rishi Raj wrote a blogpost about their GSoC project, an XMPP Compliance Tester.


An XMPP sprint is being organized for August in Cambridge. The event will take place in the Collabora offices and dates are still to be finalized. You still have a chance to vote for your preferred date and to suggest topics for the sprint. Visit and join the groupchat.


On 25th May, the European Union's "General Data Protection Regulation" (GDPR) came into force.

The GDPR has had some people spooked; for example, in an alarmist post, Monal announced its withdrawal from the EU.

The XMPP community has been discussing the GDPR's impact and have come up with various ideas and strategies on how to ensure compliance.

Discussions happened regularly in the groupchat.

There is a GDPR page on the XMPP wiki which collects and summarises the results of these discussions.

Winfried Tilanus, who's been active in these discussions, created a new proto-XEP: Best practices for GDPR compliant deployment of XMPP

Software releases



  • Escalus 4.0.0: originally created as a tool to test XMPP servers, it can also be used as a standalone Erlang application. Changes include a new XML viewer, a new XML parser and stanza pipelining.

  • Monal iOS 3.0.0 and then 3.0.1, have been released. The major release adds support for push notifications, a new UI layout, iPhone X support, multi-user chat improvements, conversation synchronization and more.

  • Monal Mac 2.1.1


  • Smack 4.3.0-beta2: This release marks an important milestone in Smack’s development cycle as the ‘4.3’ branch was created, which means there are no major API changes to be expected.

  • Strophe.js 1.2.15

by jcbrand at May 31, 2018 22:00

JC Brand

2018 Gulaschprogrammiernacht and organizing sprints for XMPP

Recently I attended the Gulaschprogrammiernacht for the first time.

It's a hacker/maker event in the Zentrum für Kunst und Medien (Centre for Arts and Media) in Karlsruhe, Germany.

AFAIK it's organized by the local chapter of the infamous Chaos Computer Club.

I heard about it from Daniel Gultsch on Twitter. It sounded like fun, so I decided to attend and spend the time adding OMEMO support to Converse.

Guus der Kinderen and I intended to organize an XMPP sprint for that weekend in Düsseldorf, but we were cutting it a bit fine with the organization, so I hoped that we could just shift the whole sprint to GPN.

Unfortunately Guus couldn't attend, but Daniel and Maxime Buquet (pep) did and I spent most of the event hanging out with them and working on XMPP-related stuff. The developers behind the Dino XMPP client also attended and hung out with us for a while and there was someone working on writing an XMPP connector for Empathy in C++.

XMPP hackers at Gulaschprogrammiernacht

Maxime worked on adding OMEMO support to Poezio, and Daniel provided us with know-how and moral support. Daniel worked mainly on the Conversations Push Proxy.

We had some discussions around the value of holding regular sprints and I told them about my experience with sprints in the Plone community.

The Plone community regularly organizes sprints, and they've been invaluable in getting difficult work done that no single company could or would sponsor internally. To me it's a beautiful example of what's been termed Commons-based peer production.

The non-profit Plone foundation provides funding and an official seal of approval to these sprints, and usually a sprint has a particular focus (such as adding Python3 support). Sprints can range from 3 people to 30 or more.

One difference between the Plone and XMPP communities, is that Plone is a single open source product on which multiple companies and developers build their businesses, whereas XMPP is a standardized protocol upon which multiple companies and developers create multiple products, some open source and some closed source.

In both cases however there is a single commons which community members have an incentive to maintain and improve as they build their businesses around it.

Another difference is between the Plone Foundation and the XMPP Standards Foundation. The XSF, for better or worse, interprets its role and function fairly strictly as being a standards organisation primarily focused on standardising extensions to XMPP, and less on community building or supporting software development.

Despite these differences, I still consider sprints a great way to foster community and to improve the extent and quality of XMPP-related software and documentation.

There is an interesting dynamic between cooperation and competition in both the Plone and XMPP communities. Participants compete with one another but they also have the shared goal of maintaining and growing a healthy software ecosystem.

Maxime was particularly excited by our discussion and very quickly put word into action by planning and announcing an XMPP sprint in Cambridge, UK in August.

There's still time to vote on the date of the sprint and to suggest topics.

Hopefully this will be the first of many more sprints and community events.

Unmanned laptops at the Gulaschprogrammiernacht

May 31, 2018 17:30

Paul Schaub

Summer of Code: Polishing the API

The third week of coding is nearing its end and I’m quite happy with how my project turned out so far.

The last two days I was ill, so I haven't gotten anything done during that period, but since I started my work ahead of time during the bonding period, I think I can compensate for that :) .
Anyway, this week I created a second Manager class as another entry point to the API. This one is specifically targeted at the Instant Messaging use-case of XEP-0374. It provides methods to easily start encrypted chats with contacts and register listeners for incoming chat messages.

I’m still not 100% pleased with how I’m handling exceptions. PGPainless so far only throws a single type of exception, which might make it hard to determine what exactly went wrong. This is something I have to change in the future.
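One way to make errors more distinguishable would be a small exception hierarchy, so callers can catch either the base class or a specific failure mode. This is only a sketch of the idea with hypothetical class names, not PGPainless's actual API:

```java
// Sketch of a more fine-grained exception hierarchy.
// Hypothetical class names, not the actual PGPainless API.
class PGPainlessException extends Exception {
    PGPainlessException(String message) { super(message); }
}

// Thrown when no usable decryption key is available for the message.
class MissingDecryptionKeyException extends PGPainlessException {
    MissingDecryptionKeyException(String message) { super(message); }
}

// Thrown when a signature does not verify against the sender's key.
class SignatureVerificationException extends PGPainlessException {
    SignatureVerificationException(String message) { super(message); }
}
```

A caller that only cares about "something went wrong" catches `PGPainlessException`; a caller that wants to, say, trigger a key fetch on a missing key catches the specific subclass.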

Another thing that bothers me about PGPainless is the fact that I have to know how an OpenPGP message is constructed in order to process it. I have to know that a message is encrypted and signed in order to then decrypt and verify it.
XEP-0373 does not specify any kind of marker that says "the following message is encrypted and signed", a design decision made in order to counter certain types of attacks. So I have to modify PGPainless to provide a method that can process arbitrary OpenPGP messages and tells me afterwards whether the message was signed, and so on.
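Such a "process anything" method could return a small metadata object alongside the plaintext. Again a hypothetical sketch (names and shape are my own, not the actual PGPainless interface):

```java
// Sketch of a metadata object that a generic "process any OpenPGP
// message" method could return. Hypothetical API, not PGPainless's.
class OpenPgpMessageInfo {
    private final byte[] content;
    private final boolean wasEncrypted;
    private final boolean wasSigned;

    OpenPgpMessageInfo(byte[] content, boolean wasEncrypted, boolean wasSigned) {
        this.content = content;
        this.wasEncrypted = wasEncrypted;
        this.wasSigned = wasSigned;
    }

    byte[] getContent() { return content; }

    // The caller learns *after* processing whether the message was
    // encrypted and/or signed, instead of having to know up front.
    boolean wasEncrypted() { return wasEncrypted; }
    boolean wasSigned() { return wasSigned; }
}
```

The consumer can then reject messages that turn out to be unsigned, without needing to know the message structure beforehand.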

Compared to last year's project, I spent way more time on documenting my code this time. Nearly every public method has a beautiful green block of Javadoc above its signature, documenting what it does and how it should be used.
What I could do better, though, are tests. Last year my focus was on creating good JUnit and integration tests, while this time I only have the bare minimum of tests. I'll try to go through my API together with Florian next week to find rough edges and afterwards create some more tests.

Happy Hacking!

by vanitasvitae at May 31, 2018 15:16

Monal IM

iOS 3.0.1 Released, How is Push?

The patch release is out. Search is restored and stability should be better.

While I am asking: how has push been working in the latest clients? I have seen thousands of devices registered, but I haven't gotten a ton of feedback on how it has worked.

If you don’t have it working yet, you need an XEP-0357 module on your server:

  • Prosody: mod_cloud_notify
  • ejabberd: mod_push
  • Tigase: Push Component
  • Openfire: does not support XEP-0357 push notifications
  • MongooseIM: mod_event_pusher_push


by Anu at May 31, 2018 13:36

Mac 2.1.1 out

The first patch update for 2.1 has been released to the App Store. iOS is awaiting approval and should be available in a few hours.

by Anu at May 31, 2018 02:45

May 29, 2018

Monal IM

New Mac and iOS betas

There are new Mac and iOS betas out. I am hoping this is the start of these two clients being 100% in sync.  This update resolves many of the stability issues in the last release and restores searching to the iOS client.

by Anu at May 29, 2018 16:00

May 28, 2018

Monal IM

iOS Search Works Again

I appear to have forgotten to re-add search to the UI during the refactor. It has been re-added and has full iOS 11 and iPhone X support.

by Anu at May 28, 2018 14:44

May 27, 2018

Monal IM

iOS Crashes

As with any big update, there are going to be bugs. I know 3.0 is not as stable as the last release. I am working quickly to fix every crash I see. So far the following have been fixed and will be shipped in an update next week. It is a long weekend in the US, so these will likely come in by Wednesday. Sorry for the problems; know that I am fixing them ASAP. Things fixed so far:

  1. Crash when trying to save an account with no server has been replaced with an error message.
  2. Crash on iPads when retrying messages
  3. Crash on iPads when deleting account
  4. Crash on fetching message history
  5. Crash sometimes when receiving messages
  6. Crash sometimes when logging in

by Anu at May 27, 2018 18:27

May 26, 2018

Monal IM

Updates and GDPR

I am looking at the stats coming back, and I see one particular crash that I would like to resolve ASAP in both the iOS and Mac clients. There will be an update next week; after that, unless something pressing comes up, development will pause as I sort through GDPR. You may already have noticed the cookie banner that appears on this page, courtesy of the wp-stats package I am using. The reason for XMPP work stopping: GDPR compliance is extra work, and one person only has so much free time in a day.

General GDPR roadmap:

  1. Site (done)
  2. Crashlytics
  3. Mac
  4. Push server
  5. iOS

by Anu at May 26, 2018 00:07

May 25, 2018

Paul Schaub

Summer of Code: Advancing the prototype

It has been a week since my last blog post, so it is time for an update.

I successfully tested my OX client against an experimental Gajim plugin written by Philip Hörist. Big thanks for his help during the testing :)

My implementation can now back up the user's secret key in a private PubSub node, as well as restore it from there. This was immensely useful during testing, as I don't have a persistent store implementation yet.
My next steps will be to implement a solution for persisting keys, as well as some kind of trust management. Florian suggested implementing the TOFU (trust on first use) trust model.
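The core of TOFU is simple: the first fingerprint seen for a contact gets pinned, and later fingerprints are only trusted if they match the pin. A minimal, self-contained sketch (hypothetical class, not the actual Smack or PGPainless API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a TOFU (trust on first use) store.
// Hypothetical class, not the actual Smack/PGPainless API.
class TofuTrustStore {
    // Maps a contact's JID to the fingerprint pinned on first contact.
    private final Map<String, String> pinned = new HashMap<>();

    boolean isTrusted(String jid, String fingerprint) {
        String known = pinned.get(jid);
        if (known == null) {
            pinned.put(jid, fingerprint);   // first use: pin and trust
            return true;
        }
        return known.equals(fingerprint);   // later: must match the pin
    }
}
```

A real implementation would of course persist the pins and offer a way to re-pin after manual verification; the sketch only shows the trust decision itself.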

PGPainless has a key selection strategy which selects keys based on the UID. I will have to change this to use key fingerprints instead, as I noticed that a user mallory@malware.sys could publish a key carrying her own UID as well as the UID of juliet@capulet.lit. In that case my implementation would encrypt the message to Mallory's key too, since it also carries Juliet's UID. Going with fingerprints instead makes the system more secure.
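To make the difference concrete, here is a minimal sketch of UID-based versus fingerprint-based selection. The types and names are hypothetical (the real code works on OpenPGP key rings, not these toy classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy stand-in for a public key: a fingerprint plus the UIDs it carries.
// Hypothetical types, not the actual PGPainless classes.
class PublicKeyInfo {
    final String fingerprint;
    final List<String> userIds;

    PublicKeyInfo(String fingerprint, List<String> userIds) {
        this.fingerprint = fingerprint;
        this.userIds = userIds;
    }
}

class KeySelector {
    // UID-based selection: Mallory can publish a key that also carries
    // Juliet's UID, and her key gets selected alongside Juliet's.
    static List<PublicKeyInfo> byUserId(List<PublicKeyInfo> keys, String uid) {
        List<PublicKeyInfo> selected = new ArrayList<>();
        for (PublicKeyInfo key : keys) {
            if (key.userIds.contains(uid)) {
                selected.add(key);
            }
        }
        return selected;
    }

    // Fingerprint-based selection: only the key whose fingerprint we
    // trust for the contact is used, regardless of which UIDs it claims.
    static List<PublicKeyInfo> byFingerprint(List<PublicKeyInfo> keys,
                                             Map<String, String> trustedFingerprints,
                                             String jid) {
        List<PublicKeyInfo> selected = new ArrayList<>();
        String trusted = trustedFingerprints.get(jid);
        for (PublicKeyInfo key : keys) {
            if (key.fingerprint.equals(trusted)) {
                selected.add(key);
            }
        }
        return selected;
    }
}
```

With UID-based selection, a message to juliet@capulet.lit would also be encrypted to Mallory's key; with fingerprint-based selection, only the trusted key is used.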

XEP-0373 had some typos and was missing some examples, for which I submitted fixes. One change I made is a breaking change, so we have to see whether it will be merged in the next few days or delayed to be merged together with later breaking modifications.

That’s it for now :)

Happy Hacking!

by vanitasvitae at May 25, 2018 14:57

May 24, 2018


Exception management at the heart of artificial intelligence performance

A process integrating artificial intelligence inevitably generates a residual error rate, which needs to be accepted as a normal mode of operation. Only humans can provide a suitable response to these exceptions.

Science fiction makes us dream about the promises of artificial intelligence, but it creates a mythology which makes it difficult to understand the real issues at stake. Yet there's no magic ingredient in AI, just a lot of mathematics.

Thus, to develop an application using machine learning, you need to go through two major phases:

  • The first one, designed and led by scientists, consists of creating a mathematical model to solve a specific problem, using a set of algorithms which are often conventional.
  • The second is an inference phase, which consists of using this model to perform analysis on new data.

For instance, you can create a model from medical images of healthy and sick people. This model is then used to diagnose a specific disease, using pre-existing data from medical images of new patients. This model will need to evolve regularly, to take into account the new data, and thus adjust the treatment accordingly.

AI is not just about algorithms and data: it’s an industrial, as well as a basic-research challenge

AI does not provide a complete intelligence, but hyper-specialized functions. To use a medical metaphor again, it only offers pieces of a brain, not an entire brain.

Besides, one of AI’s challenges lies in its capacity to make the models evolve rapidly, using new data and algorithms, without damaging the diagnosis.

Therefore, the value of intelligent applications lies in the collaboration between the models: AI needs a nervous system allowing components to exchange signals, but also to involve external elements.

That’s why mathematical models represent only one part of the intelligent systems. On an industrial scale, real-time data exchange is also vital to increase fluidity between modeling and inference, and to interconnect the functions.

Real-time communication is therefore the hidden part of the iceberg. And this is also where Human and AI can collaborate to deal with unusual or very complex situations.

Exception processing, at the heart of AI

The world is not completely predictable. Chaos theory, illustrated by the butterfly effect, shows that minute differences in initial conditions can cause seemingly unpredictable effects at the end of a complex causal chain.

This same phenomenon explains why AI models, just like weather models, will tend towards a 100% accuracy rate without ever reaching it. The residual error rate they generate is inherent to their operation.

Consequently, it becomes less important to invest ever-increasing resources to gain a few decimal points of accuracy than to handle errors under the best possible conditions.

This exception processing is at the heart of any process, from its definition to the way the organizations operate.

Exception processing at the heart of professional excellence

Operational excellence doesn't lie solely in the capacity to address the simplest nominal cases. Companies stand out through their ability to manage, in good conditions, the situations which deviate from normal circumstances.

By placing the customer experience at the core of its vision, Amazon excels and gains new market shares, through its ability to address business contingencies: late delivery, non-compliant products, etc.

Therefore, exception processing is indeed what creates customer value, the sustainability of a brand, and the reputation of a company. In the end, the nominal process is barely differentiating, unlike the capacity to cope with crisis situations.

Humans are needed to process exceptions properly

Therefore, AI’s challenge remains the processing of anomalies through human-machine collaboration, which leverages their respective strengths.

In this perspective, chats and virtual assistants become natural interfaces for intelligent applications. Tomorrow, they will be at the core of human-machine collaboration: a privileged channel for notifying a system's managers of detected anomalies, but also for addressing them by giving instructions to the machine.

A systemic approach for AI

What we call AI today is a programming technique of intelligent applications, which should be linked to a systemic logic, to have a positive impact on our society.

In this logic, AI is fallible. It relieves humans of certain repetitive tasks, but gives them stronger control and decision-making powers over exceptions. An AI-driven company can only operate if exceptions are managed humanely and efficiently: by designing analysis tools and processes, and by integrating the role of AI into our education, to see beyond the algorithms.

by Mickaël Rémond at May 24, 2018 19:39


Updated Policies

As users of a service, you have a right to know which data the service is storing about you and how it is using that data. Starting tomorrow, this right will become law in the European Union as the General Data Protection Regulation (GDPR).

In the last months, we have worked out how this affects the Jabber ecosystem. This work has resulted in the creation of new service policies for the service, which become effective today.

Privacy Policy

Our Privacy Policy describes in detail what information is stored about you, how long it is stored and how it is shared with other parties.

We are only storing and processing your information to:

  • provide the service you explicitly want (send and receive messages and files, participate in chatrooms etc.)
  • fight Jabber spam (this is a huge problem!)
  • ensure that our server operates properly

We do not create profiles of our users or try to monetize the data in any way.

Terms of Service

Our Terms of Service have moved to their own page, and we have made more explicit that you may not use our server for illegal things or to send spam.

Other Changes

To reduce our (your!) data footprint, we are disabling the following functions:

  • Disqus comments (they used to be nice a loooong time ago).
  • Direct buttons for Twitter and flattr.
  • Infinite storage of uploaded user files.

May 24, 2018 18:48