Planet Jabber

December 08, 2019

Monal IM

Mac Catalyst Build

If you have macOS Catalina and are interested in trying a VERY early build of Monal, you can get it here. This is literally the iOS client. Things to note for now: hitting the x button closes the app, and OMEMO doesn't work yet. I will start bringing more Mac code into this soon (menu bars, toolbars, etc.). This effort will likely improve the iOS client as well.

by Anu at December 08, 2019 18:12

Peter Saint-Andre

Aristotle Research Report #12: A Eudemian Thread

At the very end of Aristotle's Eudemian Ethics there is a puzzling passage about the standard (ὅρος) according to which the "good and beautifully right" person (καλοκαγαθός) makes choices with regard to natural goods such as wealth, health, strength, honor, power, good fortune, and personal relationships. Surprisingly, he says that whatever conduces to the cultivation (θεραπεία) and awareness (θεωρία) of god is best and is the most beautifully right standard, whereas anything that hinders such cultivation and awareness is unworthy, trifling, and bad (φαῦλος - the opposite of what is σπουδαῖος, i.e. worthy, serious, and good). Until these last few paragraphs, nothing in the Eudemian Ethics had obviously pointed in this startlingly "theological" direction or had advocated a godlike, superhuman existence; yet if we pay careful attention to the text, we can patiently unravel a thread of inquiry that elucidates Aristotle's last word on the topic of the best human life....

December 08, 2019 00:00

December 07, 2019

Monal IM

Refactoring, bug fixes, testing and dropbox

I have been refactoring a lot of code to address long-standing bugs. This will introduce some regressions, but I hope to catch all of them in TestFlight before release. Additionally, the code should facilitate better testing; you may notice me adding Travis CI and the beginnings of unit/integration testing on every commit. I am also removing the Dropbox integration. It is half a decade old, predates HTTP upload, and largely duplicates that functionality.

by Anu at December 07, 2019 14:41

December 05, 2019

Tigase Blog

BeagleIM 3.4 and Siskin IM 5.4 released

New versions of XMPP clients for Apple’s mobile and desktop platforms have been released.

BeagleIM 3.4

The stable release of BeagleIM 3.4 contains the following changes:

  • Added support for setting MUC room avatar

  • Simplified MUC room settings window

  • Fixed an issue with establishing VoIP connections
  • Fixed an issue with possible wrong order of messages received at the same time from the same entity
  • Fixed an issue with automatic replacement of single quotes with apostrophes in the XML console
  • Added timeout for presenting “composing” animation
  • Fixed an issue with “Mute notifications” option not being visible

SiskinIM 5.4

This version adds support for setting MUC room avatar.

You can download both applications from their respective app stores: Beagle IM from the macOS App Store and Siskin IM from the iOS App Store. You can also star them on GitHub: Siskin IM on GitHub and Beagle IM on GitHub.

December 05, 2019 12:40

Tigase XMPP Client Apps

Our XMPP Chat Apps philosophy

Web-based apps built with JavaScript, React, and so on are great… for developers.

We care about users, and we understand that the only way to provide users with a great experience is through native apps.

Therefore we have put a lot of effort and dedication into developing a native client for each platform separately. Each of our apps is tailored for the best experience and native feel. They are also optimized for their platform, so they are lightweight but powerful and take full advantage of what the environment they run on has to offer.

All our applications offer the same set of features, so there is no need to repeat the list for each app below. Here it is:

  • Simple Chat - yes, this is the good, old 1-1 chat.
  • Group Chat - like the old IRC, now it is MUC (Multi-User Chat). You can create chat rooms, public or private, open or password protected, with moderators and so on…
  • Push notifications - if the app is not running on the device, the user is not connected to the XMPP server, but they can still receive notifications about new messages from people.
  • iOS now silences calls from unknown numbers - we had this before them. All new chats from unknown users go to a separate “From unknown” tab, and you can turn off push notifications about messages from people who are not on your contact list. Plus, of course, Tigase XMPP Server has built-in anti-spam filtering which helps too.
  • Voice and Video calls are pretty much standard nowadays, and Tigase clients support them as well.
  • Multi-account support - you can add as many accounts on different servers as you want in your client and communicate through all of them at the same time.
  • File Sharing - yes, photos, documents, anything can be sent through the XMPP client to your buddies, either in a simple 1-1 chat or to an entire team in a group chat. The client displays photos nicely, so you can see them directly in the app.
  • OMEMO - E2E encryption is available in all our client apps.

We at Tigase use all our XMPP apps ourselves.

All Open Source

All our XMPP Chat applications are open source with code available in public repositories on GitHub.

Stork IM - Tigase Android XMPP Client

The first mobile client we created: a native Android app, designed and written from the ground up, again and again…

We experimented, made mistakes and learned. So here it is. Android Java, native app. Lightweight, fast and powerful.

Our Android client works on most Android devices. It offers the set of typical features you would expect from a chat application, plus many not-so-typical ones.

Siskin IM - Tigase iOS XMPP Client

Our second mobile client, this one for iOS: a native Swift app optimized for both phones and tablets.

Simple to use but with many advanced options for more demanding users.

We suggest starting out in simple mode and gradually exploring the other features and options.

Beagle IM - Tigase macOS XMPP Client

Mobile devices are good when you are on the go. But we are software developers and we work on real computers all the time. Hence we also offer a real native desktop chat client.

Again, it’s a native Swift app designed from the ground up and optimized for desktop macOS.

The feature set matches all our other apps.

If you work on macOS, we honestly recommend trying it out.

December 05, 2019 01:08

Tigase XMPP Libraries

Our software philosophy

Actually, nothing new and nothing surprising here. We want to have as much reusable code as possible, and this reusable code should have a simple but powerful API, to be useful for quickly creating software.

That’s it.

And this is how we design and develop our XMPP libraries. Check them out.

Documentation for all our projects is available online. Sample code? Take a look at our XMPP chat apps, which are open source too.

December 05, 2019 01:08

Tigase XMPP Server

Tigase XMPP Server is Java based software

Tigase XMPP Server is a standalone application written in Java. It is not a “web server” system; it runs independently of any other software. In most cases all it needs to run is a Java Virtual Machine (JVM). For extended functionality it may require a few external libraries, but for the most part it is all in-house developed software.

Java based but still very efficient

Java is infamous for its high resource requirements and slowness. This unfortunate, bad reputation is a result of early impressions from the first years of Java, and of poorly written, bloated Java monster software. Poorly written and poorly maintained software results in tons of redundant code and overall sluggishness.

There are, however, many Java programs which are good examples of how efficient, fast, and resource-friendly Java code can be. Tigase XMPP Server is one of these good examples.

We put a lot of effort into designing, optimizing, and implementing efficient code. Here are some interesting facts:

  • The main binary to run Tigase XMPP Server is less than 3 MB
  • In some cases it can run with as little as 10 MB of RAM; a usable, typical XMPP chat system can be deployed on 50 MB of RAM
  • It was successfully tested to handle over 30 million messages per second
  • It runs on production systems with over 10 million users
  • It runs on production systems processing over 5 million messages per second
  • Typical message processing time is below 0.01 seconds if the database is not involved

Reliable

We frequently put Tigase XMPP Server through very rigorous testing, running hundreds of automated tests, performance tests, and long-lasting reliability tests. This allows us to discover bugs, inconsistencies, bottlenecks, memory leaks, and other potential problems in long-running applications.

Every release is thoroughly tested and verified before publication.

Tigase XMPP Server is known to have run for over 3 years without a restart on a production system.

Secure

XMPP was designed from the ground up to be secure. Tigase, however, does not stop there; we took additional steps to make sure Tigase provides up-to-date security.

Through extensive testing and third-party verification, we make sure it is well-written software, resistant to all common attacks, including SQL injection, DoS attacks, man-in-the-middle attacks, and many others.

We closely track changes and developments in the security protocols and make sure Tigase is up to date, uses only safe ciphers and algorithms.

An additional hardened mode puts Tigase into a very restrictive configuration, which may break connectivity with older apps and servers, but ensures the tightest possible security for demanding customers.

Very Scalable

Tigase uses resources very efficiently. It can easily handle half a million or more users on a single server. But no matter how efficient and optimized the software is, there is a limit to how much a single server can handle.

Therefore, from the very beginning we planned on making Tigase scalable. Out of the box, Tigase offers near-linear scalability, or exactly linear for some use cases.

It can be deployed on a large number of servers across distributed data centers and cloud providers to provide a single logical system for a practically unlimited number of online users sending millions of messages per second.

Cloud independent

Tigase XMPP Server is a Java application and can be deployed on anything that can run Java programs. It has some special integration features for the Amazon AWS cloud, but it can run on any cloud. Our customers deploy Tigase on Google Cloud, Microsoft Azure, and many others, as well as in in-house dedicated data centers.

Tigase has a built-in load balancer to better distribute connected users and devices, but it also plays nicely with the external load balancers used in different environments.

Extensible

Tigase XMPP Server can be used as it is.

Out of the box it is capable of providing sufficient functionality for typical XMPP systems, and in many cases for not-so-standard XMPP services.

There are, however, deployments with specific requirements, or third-party systems with which Tigase has to integrate. For such cases, Tigase XMPP Server offers exceptional flexibility: a well-designed, rich API allows adding custom elements like building blocks.

Not a single line of code in Tigase is fixed. Anything and everything can be replaced with custom-made code and plugged in through the configuration file.

Administrator friendly

From our experience we know that starting a complex system is a big challenge. An even greater challenge, however, is maintaining such a system long-term. Therefore, we have put a lot of effort into making a sysop's life easier.

A huge number of tools are built into Tigase XMPP Server which make maintaining it much simpler than expected:

  • A command line tool to execute all admin tasks
  • A web UI for admins to see critical system parameters and performance metrics
  • Thousands of runtime performance metrics that allow diagnosing the system in real time
  • A built-in self-monitoring system which can send notifications via email or XMPP if it detects problems
  • Detailed diagnostic logging that can be switched on/off
  • Detailed diagnostic logging for a single user that can be switched on/off
  • Audit log
  • Self-fault recovery
  • Automatic cluster reconfiguration

Easy to track performance

Proper monitoring is one of the key areas we focus on during development, testing, and service maintenance. Tigase XMPP Server offers thousands of runtime performance metrics, which allow tracking the system in real time.

Every significant processing unit generates performance metrics, so if there is any slowdown or bottleneck, it is very easy to diagnose the system, locate the problem, and fix it.

Easy to integrate

There are many ways to integrate third-party systems with Tigase XMPP Server.

It has a well-thought-out, rich API which allows adding new components and plugins. These plugins can interact with other systems to exchange information.

Most notably, Tigase employs a common pattern of so-called “Connection Managers”, which are responsible for network communication. Each connection manager speaks a different protocol, and Tigase can easily learn new protocols to connect to virtually any external service and exchange information in real time.
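
The connection-manager idea generalizes beyond Tigase. As a rough sketch (in Go rather than Tigase's actual Java codebase, with all type and function names invented for illustration), each manager speaks one wire protocol and hands the core a decoded payload in a common form:

```go
package main

import "fmt"

// ConnectionManager is a hypothetical illustration of the pattern:
// one implementation per wire protocol, all exposing the same interface.
type ConnectionManager interface {
	Protocol() string
	Decode(raw []byte) (string, error)
}

// xmppManager would parse XML stanzas in a real system.
type xmppManager struct{}

func (xmppManager) Protocol() string { return "xmpp" }

func (xmppManager) Decode(raw []byte) (string, error) {
	return "stanza:" + string(raw), nil
}

// httpManager would parse HTTP requests in a real system.
type httpManager struct{}

func (httpManager) Protocol() string { return "http" }

func (httpManager) Decode(raw []byte) (string, error) {
	return "request:" + string(raw), nil
}

func main() {
	// The core routes by protocol name; supporting a new protocol
	// means registering one more manager, not touching the core.
	managers := map[string]ConnectionManager{}
	for _, m := range []ConnectionManager{xmppManager{}, httpManager{}} {
		managers[m.Protocol()] = m
	}
	msg, _ := managers["xmpp"].Decode([]byte("<message/>"))
	fmt.Println(msg)
}
```

The point of the pattern is that the core never sees protocol details, only decoded payloads, which is what makes plugging in a new transport cheap.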

Tigase also offers access through a REST API, which can be easily extended using various scripting languages. This is a powerful feature that allows adding new REST API calls using a programming language of your choice.

Tigase XMPP Server can also be configured to retrieve user data from different databases storing data in different formats. This allows for easy integration with other systems without writing a single line of code.

December 05, 2019 01:08

Tigase Instant Communication, Presence and Messaging

What is “Instant Communication”

First things first. What is this all about?

We say this is “instant communication” or “near real-time communication”, and indeed, this is about communicating: talking, sending messages, sending other information and documents. Instant, or real-time, means that whatever you send is sent right away and delivered right away.

Does the receiving person get it right away too? Well, it depends: if the person is online, they get it right away and can respond right away.

Messaging really means chatting, talking. It’s not just sending and receiving messages: you send a message, a friend receives it in real time and can respond right away. You see the full chat history and context; you just talk. And you can chat with many people at the same time, in what we call group chat rooms. It’s like sitting at a table with friends and talking to them.

What is special about this system is that you know if your friends are online. If you send a message to an online friend, you can expect a response right away; if they are offline, you know about it, and you know you may have to wait for a response. No guessing. This is the “Presence” part of the title. Presence is just the status of the other person: online, offline, busy, away, and so on… So you can not only send a message to your friend instantly, but also know their current status, also in real time. As soon as somebody changes their status, you know it right away.

Presence is also much more than just online status. Presence can optionally carry additional information, like location, mood, what your friends are listening to, and anything else your friend chooses to share with you.

And, last but not least, the system is not just for people talking. It’s for devices as well. Anything that can send information, share data, or update its status can effectively use our software. IoT is an ideal example where our software excels and shows its full power.

How is it different from e-mail?

Simple enough. It all looks similar to email: you send and receive messages. What’s more, even a user address looks exactly like an email address. So what is the difference?

There are a few significant differences:

  1. E-mail is not real-time and is not instant. It may be quite fast, but it may also take quite a while (a couple of minutes) until the email is actually delivered, and this is still considered normal for email messages.

    XMPP is actually near real-time and instant. Typical delivery time is way below 1 second.

  2. E-mail is not really for chatting or talking. It’s more like sending letters, longer texts. It’s not really suitable for sending short messages or notifications.

    XMPP is made just for that: chatting, talking, sending short messages or notifications. Our software has also expanded on the basic features and allows rich text formatting using Markdown. You can send long texts and even letters, nicely formatted and pleasant to read.

  3. E-mail has no presence information. You send an email message but you do not know whether your friend is online, when they get the message, when they can read it, or when they will finally respond. You just send an email and wait.

    XMPP does have presence information, plus all kinds of confirmations built in. You know if your friend is online, when they received the message and read it, and when to expect a response. You know whether your friend is available to talk right now or busy doing something.

  4. E-mail was designed and created a very long time ago, when security and privacy were not such big concerns and there was no spam or other attacks. Over time the security of email has improved, but there are many different techniques and standards, not always adopted by every email provider. Spam has been a huge problem for a long time and so far nobody knows how to solve it.

    XMPP came along long after e-mail, when all of email's weaknesses and problems were well known, so it was designed from the ground up to solve them. Security is embedded in the XMPP core, privacy was a main concern, and preventing spam and DoS attacks was taken into consideration from the very beginning.

How is it different from SMS / Text Messages?

SMS / Text messages are instant, aren’t they? They are sent and delivered in real-time, aren’t they?

At first glance, it all sounds like SMS / texting. People chat over SMS all the time. Is XMPP any different?

There are a few significant differences:

  1. Presence - completely missing from SMS/texting. You have no idea whether the person is at their device to read the message and text you back. They usually get the message right away, unless their device is turned off, but you have no way of knowing whether the device is on or off, whether your friend is near it, or whether they are too busy to respond.
  2. User address/ID - for SMS/texting, this is just a phone number. Sure, nowadays it is kind of a personal thing, but if it changes, friends may have trouble finding out your new number or contacting you at all, so you have to take good care of letting them know about the change. And even if you keep your number and people can text you, the phone may be lying on a table while you rest on the couch with your tablet. To read a text from a friend or send an SMS, you would have to interrupt your rest, find your phone, and type the message on its screen. Not to mention your chat history: when your mobile is gone, all the SMSes/texts are gone too.

    With XMPP, this problem does not exist. You can have multiple applications connected to your one user address and chat with friends using whatever device you have handy. All your friends will always recognize you as you, and you can choose to store your chat history on the server and see it on any device and app you connect with.

  3. Chat feedback. With SMS / texting you send a message and… wait. In XMPP, you send a message, you see when it was delivered, you see when your friend read it, and you can even see when your friend starts typing a response.

How is it different from Twitter, FB?

Twitter and Facebook are social networking services. Although you can send a message to other people, these services are not really designed for effective, real-time communication. They are more like publications, where you can post a message, a longer article, a photo, or just about anything for people to see when they come over to your profile.

In theory, XMPP at its core can do everything that can be done on Twitter and Facebook, and much more. It’s just a matter of implementing apps that make use of all the XMPP capabilities.

The Tigase XMPP Server could serve as a social networking platform out of the box and there already are systems like this. Our focus, however, is on real-time communication, hence our apps are designed as effective messaging clients.

How is it different from Skype, ICQ, AIM, FB Messenger, iMessage and other big names?

Ok, so there are chat / messaging systems available already. They are instant and near real-time. Big brands are behind them, so they are not going anywhere any time soon. They also offer voice and video calls, all the same features, and maybe even more.

So how are XMPP and Tigase different, and better?

First of all, XMPP is a public and open standard. You know what is under the hood and how it works, and you can evaluate whether it is secure. You can easily create your own tools, apps, and servers to connect to the worldwide XMPP network. And XMPP is extensible by design, so you can easily customize and extend the basic protocol with more features and capabilities.

None of this is true for the big name systems.

You do not really know how your messages are sent and delivered by the big names, or how your personal data is handled. Even if you assume that they are big, with deep pockets, and can therefore implement secure systems and take care of your data, there are other important questions: Is it safe? Who has access to it? Would they sell your profile to a third party?

XMPP, and Tigase for that matter, allow you to deploy your own instant communication system, independent from any other. You keep all your data, you control everything, and you decide what is allowed and who can communicate with whom. And while having an independent system for your own needs, you can still communicate with other users who are on XMPP.

And if you want extra features or customization, there is no way to get them on the big-name systems. You just have to rely on what is there and adjust yourself to what is available.

How is it different from Slack?

And again, it all sounds like Slack. So similar in every aspect. Is there any difference?

Indeed there is. In principle, XMPP has all the same features as Slack, and probably even more. The main difference is that with XMPP you can choose your software vendor (Tigase is one of them, but there are many others), deploy your own, independent system which is under your full control, keep your data, and decide what happens with it.

December 05, 2019 01:08

December 03, 2019

The XMPP Standards Foundation

XMPP in all languages! 03 Dec 2019

Welcome to the XMPP newsletter covering the month of November 2019.

Help us sustain this as a community effort, whose process is fully documented.

Articles

Edivaldo Brito has written in Portuguese "Como instalar o moderno cliente Jabber/XMPP Dino no Linux via Snap" ("How to install the modern Jabber/XMPP Dino client on Linux via Snap").

Last month, the XMPP newsletter was translated by community members:

We are extremely grateful for these contributions! If you can contribute a translation in your own language, please contact the CommTeam.

Software releases

Servers

The Ignite Realtime community has released:

Metronome IM v3.13.0 has been released, read the changelog.

Clients and applications

Movim 0.16 – Cesco has been released, with drawing and content sharing, attachments, chat improvements, chat and chatroom list improvements, and search, plus more.

gajim.org features a new website. A post about Gajim's development is planned every month, starting with October and November.

SàT lead developer Goffi has published his progress note 2019-W48.

Conversations 2.6.0 has been released.

Converse.js 5.0.5 has been released.

Libraries

xmpp.js has been released in versions 0.9.0 and 0.9.1.

Process One has announced the release of go-xmpp 0.3.0.

Extensions and specifications

This month, nothing is in Last Call, Proposed, or Obsoleted.

Message Retraction

Version 0.1.0 of XEP-0424 (Message Retraction) has been released.

Abstract: This specification defines a method for indicating that a message should be retracted.

Changelog: Accepted by vote of Council on 2019-10-23. (XEP Editor (jcb))

URL: https://xmpp.org/extensions/xep-0424.html

Message Moderation

Version 0.1.0 of XEP-0425 (Message Moderation) has been released.

Abstract: This specification defines a method for groupchat moderators to moderate messages.

Changelog: Accepted by vote of Council on 2019-10-16. (XEP Editor (jcb))

URL: https://xmpp.org/extensions/xep-0425.html

Updated

  • Version 1.23.1 of XEP-0001 (XMPP Extension Protocols) has been released.
  • Version 1.17.0 of XEP-0060 (Publish-Subscribe) has been released.
  • Version 1.0.1 of XEP-0076 (Malicious Stanzas) has been released.
  • Version 1.1.4 of XEP-0084 (User Avatar) has been released.
  • Version 1.0.1 of XEP-0158 (CAPTCHA Forms) has been released.
  • Version 0.3.0 of XEP-0423 (XMPP Compliance Suites 2020) has been released.
  • Version 0.2 of XEP-0328 (JID Preparation and Validation Service) has been released.
  • Version 0.3 of XEP-0372 (References) has been released.
  • Version 0.7.0 of XEP-0392 (Consistent Color Generation) has been released.
  • Version 0.2.0 of XEP-0393 (Message Styling) has been released.
  • Version 1.1.0 of XEP-0410 (MUC Self-Ping (Schrödinger's Chat)) has been released.
  • Version 0.2.0 of XEP-0420 (Stanza Content Encryption) has been released.

Thanks all!

This XMPP Newsletter is produced collaboratively by the community.

Thanks to Nyco, Guus, Wurstsalat, MDosch, Neustradamus, Ppjet6 for their help in creating it!

Please share the news on "social networks":

License

This newsletter is published under the CC BY-SA license: https://creativecommons.org/licenses/by-sa/4.0/

by nyco at December 03, 2019 14:00

Monal IM

iOS 4.1 and 4.2

I have released iOS 4.1, which has the temporary fix for notifications in iOS 13. I am now working on 4.2, which should have a lot of fixes for long-running issues. I hope this update also improves the MUC experience; the work I am doing is part of my long-planned stability and MUC improvement update. This release should also see the return to the French App Store. A macOS Catalina build is a stretch goal.

by Anu at December 03, 2019 11:30

November 29, 2019

ProcessOne

go-xmpp 0.3.0

A new version of the go-xmpp library, which can be used to write XMPP clients or components in Go, has been released. It’s available on GitHub.

Among the new features, it adds a WebSocket transport; for this reason, the minimum Go version required is now 1.13. It also adds a SendIQ method to send an iq stanza and receive the response asynchronously on a channel.
On the component side, it fixes a SIGSEGV in xmpp_component (#126) and adds more tests for the Component code.

A small example

Writing an XMPP component

Speaking of components, here is a simple example of how to create one. As a reminder, components are external services that can communicate with an XMPP service using the Jabber Component Protocol, as described in XEP-0114.

A component has its own XMPP domain and must know the server address and service port:

    const (
        domain  = "mycomponent.localhost"
        address = "localhost:8888"
    )

The options needed when creating a new component are defined as follows (the secret must match the one defined in the server config):

    opts := xmpp.ComponentOptions{
        TransportConfiguration: xmpp.TransportConfiguration{
            Address: address,
            Domain:  domain,
        },
        Domain:   domain,
        Secret:   "secret",
    }

To create the simplest possible component, just create a default router and pass it to NewComponent along with the above options:

    router := xmpp.NewRouter()
    c, err := xmpp.NewComponent(opts, router)

Connect establishes the XMPP connection to the server, and authenticates with it:

    err := c.Connect()

Now we can try to send a disco iq to the server:

    iqReq := stanza.NewIQ(stanza.Attrs{Type: stanza.IQTypeGet,
        From: domain,
        To:   "localhost",
        Id:   "my-iq1"})
    disco := iqReq.DiscoInfo()
    iqReq.Payload = disco

To get the response asynchronously, SendIQ returns a channel on which we expect to receive the result iq. We also need to pass it a context to set a timeout:

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    res, _ := c.SendIQ(ctx, iqReq)

Now we just have to wait for our response:

    select {
    case iqResponse := <-res:
        // Got response from server
        fmt.Print(iqResponse.Payload)
    case <-time.After(100 * time.Millisecond):
        cancel()
        panic("No iq response was received in time")
    }

Full example

The full program that runs a component, connects it to an XMPP server, and performs a disco query on it to display the result is as follows:

    package main

    import (
        "context"
        "fmt"
        "time"

        xmpp "github.com/FluuxIO/go-xmpp"
        "gosrc.io/xmpp/stanza"
    )

    const (
        domain  = "mycomponent.localhost"
        address = "build.vpn.p1:8888"
    )

    // Init and return a component
    func makeComponent() *xmpp.Component {
        opts := xmpp.ComponentOptions{
            TransportConfiguration: xmpp.TransportConfiguration{
                Address: address,
                Domain:  domain,
            },
            Domain: domain,
            Secret: "secret",
        }
        router := xmpp.NewRouter()
        c, err := xmpp.NewComponent(opts, router)
        if err != nil {
            panic(err)
        }
        return c
    }

    func main() {
        c := makeComponent()

        // Connect the component to the server
        fmt.Printf("Connecting to %v\n", address)
        err := c.Connect()
        if err != nil {
            panic(err)
        }

        // Build a disco iq
        iqReq := stanza.NewIQ(stanza.Attrs{Type: stanza.IQTypeGet,
            From: domain,
            To:   "localhost",
            Id:   "my-iq1"})
        disco := iqReq.DiscoInfo()
        iqReq.Payload = disco

        // res is the channel used to receive the result iq
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        res, _ := c.SendIQ(ctx, iqReq)

        select {
        case iqResponse := <-res:
            // Got a response from the server
            fmt.Print(iqResponse.Payload)
        case <-time.After(100 * time.Millisecond):
            cancel()
            panic("No iq response was received in time")
        }
    }

by Jérôme Sautret at November 29, 2019 16:08

November 28, 2019

Jérôme Poisson

SàT progress note 2019-W48

It's time for a new progress note.

In the last one I was talking about my attempt to optimize the chat history with Cagou on Android. Indeed, while scrolling through history is smooth on desktop, it's quite slow on Android (though not dramatically so).

My plan was to use RecycleView, which is an optimized widget to show a big list of widgets.

But, as I explained last time, RecycleView has trouble with widgets of dynamic height (which is the case for chat messages, whose height depends on their content). While working on a workaround (the idea was to pre-render each widget without displaying it, and use the calculated size), I realized that even RecycleView was not so smooth on Android, and the workaround was complicating the code a lot.

So I took the (hard) decision to abandon this idea; it was taking too much time with little prospect of good results. Sometimes it's a good thing to step back and save time for other things.

Instead, I've simplified some other parts of the code and, thanks to a blog post on the Kivy website that I've read recently, I've used the idea of a delayed resize. The message history is complex to resize because there are many elements (messages with styling, images, avatars, nicknames, timestamps, receipt flags, etc.), so when you resize the window or the message widgets, the resize is not done immediately but only after a short delay (replacing any previously delayed resize). This way you limit the number of size calculations, and the feeling is better.
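The delayed-resize idea is essentially a debounce: each new resize request cancels the previously scheduled one, so only the last request in a burst actually triggers a relayout. The SàT code is Python/Kivy, but the pattern is language-agnostic; here is a hedged sketch in Swift using Grand Central Dispatch (the name relayoutMessageHistory is hypothetical):

```swift
import Foundation

/// Coalesces bursts of calls into a single delayed execution:
/// each new call cancels the previously scheduled one.
final class Debouncer {
    private let delay: TimeInterval
    private let queue: DispatchQueue
    private var pending: DispatchWorkItem?

    init(delay: TimeInterval, queue: DispatchQueue = .main) {
        self.delay = delay
        self.queue = queue
    }

    func call(_ action: @escaping () -> Void) {
        pending?.cancel()                      // drop the previously delayed resize
        let item = DispatchWorkItem(block: action)
        pending = item
        queue.asyncAfter(deadline: .now() + delay, execute: item)
    }
}

// Usage sketch: many resize events within 0.2 s trigger only one relayout.
// let debouncer = Debouncer(delay: 0.2)
// debouncer.call { relayoutMessageHistory() }
```

The same effect is obtained in Kivy by rescheduling a clock event on every resize request.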

There is still room to improve the performance of chat history scrolling on Android, but that can wait; it's usable enough for now.

I've come across a ticket for python-for-android predicting a hard future on Android: background tasks are more and more difficult to run. I'm not sure how it will evolve and what the consequences for SàT will be; we'll see.

Still on the Kivy side, I've realized that a bug I had on Android (an issue when sliding the chat widget with the Carousel) was fixed in the dev version. As I have no idea when the next version of Kivy will be released and I had to fix this issue immediately, I've backported the dev version of Carousel into Cagou, which I'll remove as soon as Kivy 2.0 is available.

I've moved to Python 3.8, and I've had to face a couple of problems.

Twisted, which is a major component of the backend, is not yet Python 3.8 compatible. Fortunately, the issue and fix were easy, so I've reported the issue and proposed a fix.

It is not super pleasant to propose a fix to Twisted: you have to look into the doc (which is rather indigestible), create the ticket on their bugtracker – with a terrible UI – but with a GitHub account (I have one for contributions only, but I don't like being forced to have it), do the fix, write a test – with a not so pleasant "Believe me, if you write your tests after you write your code, we will know. It's more obvious than you think." making you feel like a little pupil at school –, write a piece of text to generate the changelog automatically, and finally update the ticket to ask for review (then follow the usual change request/review cycle until the patch gets merged, rejected or abandoned).

On the bright side, the Twisted community is nice, and I have to admit that the code quality of Twisted is really good thanks to their test-driven development (and the general competence of the people working on it). I've been using Twisted since the beginning of SàT (without any regret so far), and I really love the stability of its API, and that I rarely run into a blocking bug. I just have the feeling that it could be more contributor-friendly.

For the same reason as for Kivy, I've backported the patch to sat_tmp to be able to use SàT and SàT PubSub with Python 3.8.

Now I'm working on infinite scroll on Cagou. It is functional but not yet smooth.

That's all for this progress note; as usual, feedback is welcome.

by goffi at November 28, 2019 21:43

November 27, 2019

Ignite Realtime Blog

inVerse Openfire plugin 5.0.5.1 released

@wroot wrote:

The Ignite Realtime community is happy to announce the release of version 5.0.5.1 of the inVerse plugin for Openfire!

This update brings changes and fixes from Converse 5.0.5.

Your instance of Openfire should automatically display the availability of the update. Alternatively, you can download the new release of the plugin at the inVerse plugin’s archive page

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at November 27, 2019 21:22

November 25, 2019

ProcessOne

Building Realtime Streaming Architectures

Realtime is not only about client interactions. We have been using XMPP & MQTT for a long time to connect people and things together.

However, there is another use case for realtime that is a little less known: realtime streaming architectures. This is a design pattern that you can use to make the core of your applications or your information system realtime.

The principle is simple: you decouple components into isolated services that talk to each other through a data bus, very often Kafka, using the publish & subscribe pattern.

This approach is very powerful, as you can tolerate a component being down for a short maintenance, with the confidence that the system will catch up. The pattern is also useful to tune the computing capacity at each stage of the processing pipeline. And finally, the pattern is also popular to decouple services and make them able to evolve independently.
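The publish & subscribe decoupling described above can be sketched in a few lines. This is a hedged, in-memory stand-in for a real bus like Kafka (topic name and messages are invented for illustration): producers and consumers only know the bus and a topic, never each other.

```swift
import Foundation

// A minimal in-memory publish & subscribe bus, standing in for Kafka.
// Services subscribe to topics and publish events; they stay decoupled
// and can evolve (or be taken down briefly) independently.
final class Bus {
    private var subscribers: [String: [(String) -> Void]] = [:]
    private let queue = DispatchQueue(label: "bus")  // serializes access

    func subscribe(topic: String, handler: @escaping (String) -> Void) {
        queue.sync { subscribers[topic, default: []].append(handler) }
    }

    func publish(topic: String, message: String) {
        queue.sync { subscribers[topic]?.forEach { $0(message) } }
    }
}

let bus = Bus()
// Two independent services consume the same event stream.
bus.subscribe(topic: "parcel.scanned") { print("billing service saw: \($0)") }
bus.subscribe(topic: "parcel.scanned") { print("tracking service saw: \($0)") }
bus.publish(topic: "parcel.scanned", message: "parcel 42 arrived at hub")
```

A real bus adds what this sketch lacks: durable storage of events, so a consumer that was down can catch up by replaying the stream.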

These are just the technical benefits; from a high-level business perspective, it simply means that you get a more resilient system, able to process events in realtime, that can serve as the basis of innovative new offerings for customers. This leads to a significant business edge.

I had already presented, back in 2017 at the DotGo Conference, what we had been doing at a technical level to build realtime streaming architectures:

Today, we are able to share more by publishing a case study, showing the type of architecture we have been building for Colissimo, a leading postal service in France. Go ahead and read Colissimo case study.

Do not hesitate to contact us, if you need guidance on how to build, improve, troubleshoot or rework such an architecture.

by Mickaël Rémond at November 25, 2019 17:28

Gajim

Development News November 2019

This is the second post of a news series about Gajim’s development. In these posts I (wurstsalat) will try to summarize a month of development around Gajim. Sometimes these posts will also cover python-nbxmpp and XMPP in general. November’s development brought improvements to group chats, theming, drag and drop actions, OMEMO, and more. Feel free to join gajim@conference.gajim.org to discuss with us.

November 25, 2019 00:00

November 24, 2019

Monal IM

More on iOS Pushes

I will release the update that I have been testing as 4.1. This will have reliable pushes again but will not show the message text. Hopefully 4.2 will have reliable pushes with the message content in iOS 13.

by Anu at November 24, 2019 04:14

November 21, 2019

ProcessOne

Real-time Radar #27

ProcessOne curates the Real-time Radar – a newsletter focusing on articles about technology and business aspects of real-time solutions. Here are the articles we found interesting in Issue #27. To receive this newsletter straight to your inbox on the day it is published, subscribe here.

GopherCon 2019 Highlights

ProcessOne CEO had the chance to attend once more the big annual Go conference: GopherCon 2019. For the first time this year, the location changed from Denver to San Diego. The ocean, mild climate and extraordinary party venue (on the USS Midway) contributed to a relaxed and friendly atmosphere.

The State of the XMPP Community

This keynote gives an overview of what various XMPP projects have been doing (implementation-wise) over the last ~12 months and tries to explain why some features have been adopted more quickly than others.

Cisco Scores Big with a New IETF-Approved Internet Standard

This June, Cisco achieved a milestone when the Internet Engineering Task Force (IETF) declared their XMPP-Grid architecture an official Internet standard for security information exchange.

The Death of Jabber

Looking at the two previous links, it seems open XMPP and Cisco XMPP are going forward quite well – albeit in different directions. However, here’s an opposite point of view.

Deep Dive Into MQTT

Message Queuing Telemetry Transport (MQTT) has been relevant for years. It’s enjoying even wider attention now thanks to the explosive growth of IoT, with both consumer and industrial economies deploying distributed networks, edge computing and data-emitting devices as part of everyday operations.

Machine Vision Meets MQTT Messaging

This author covered recognizing objects with the JeVois sensor and sending the data to text-to-speech Processing scripts. Next, they describe passing the recognized object name and location data to other machines and systems, using MQTT.

by Marek Foss at November 21, 2019 10:33

November 17, 2019

Ignite Realtime Blog

HTTP File Upload plugin 1.1.3 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.1.3 of the HTTP File Upload plugin for Openfire!

This plugin enables users to share files in one-on-one and group chats by uploading a file to a server and providing a link.

This update fixes an issue with the MIME type not being returned by a webserver.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the HTTP File Upload plugin archive page

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at November 17, 2019 17:58

November 15, 2019

ProcessOne

Real-time Radar #26

ProcessOne curates the Real-time Radar – a newsletter focusing on articles about technology and business aspects of real-time solutions. Here are the articles we found interesting in Issue #26. To receive this newsletter straight to your inbox on the day it is published, subscribe here.

Uniting global football fans with an XMPP geocluster

When you are running one of the top sport brands, launching a new innovative app always means it comes with great expectations from your fans. That’s why highly recognised brands turn to ProcessOne.

Get started with NB-IoT and Quectel modules

The year is 2029. Humans are populating the Moon, starting at Moon Base One. Two Moon Base Operators are about to commit a grave mistake in the crop garden of beautiful red tomatoes…

Blockchain and (I)IoT

Many people try to find a mix of Blockchain and IoT in order to simplify communication between nodes in IoT solutions, increase communication security, and allow payments between nodes (e.g. a smart device can pay for some services when needed).

An era of IoT

Message Queuing Telemetry Transport (MQTT) is a M2M and IoT connectivity protocol. It is an open protocol specified by IBM and Eurotech, and recently it is used by the Eclipse foundation in M2M applications.

Crocodile solar pool sensor

This instructable shows how to build a rather special pool sensor measuring the pool temperature and transmitting it via WiFi to Blynk App and to a MQTT broker. It uses the Arduino programming environment and an ESP8266 board (Wemos D1 mini pro).

Poets explore the language of push notifications

Has a poem ever made you cry? If you said yes, you’re just one of countless people who’ve been deeply moved by poetry. Now, has a push notification ever made you cry? Maybe not. In fact, we hope not.

New way to video conference

Remember when we had a video calling standard that worked with all mobile phones around the world, so you could just call someone up and see them in live video on the other end while talking to them? Me neither. That never happened.

by Marek Foss at November 15, 2019 11:32

November 14, 2019

ProcessOne

ejabberd 19.09.1

We are announcing a supplemental bugfix release of ejabberd version 19.09.1. The main focus has been to fix the issue with webadmin returning 404 Not Found when the Host header doesn’t match anything in the configured hosts.

Bugfixes

Some people have reported still having issues when connecting to the web administration console. We solved that hopefully once and for all.

Technical changes

There is no change to perform on the database to move from ejabberd 19.09 to ejabberd 19.09.1. Still, as usual, please make a backup before upgrading.

Download and install ejabberd 19.09.1

The source package and binary installers are available at ProcessOne. If you installed a previous version, there are no additional upgrade steps, but as a good practice, please back up your data.

As usual, the release is tagged in the Git source code repository on GitHub. If you suspect that you’ve found a bug, please search for or file a bug report in Issues.


Full changelog
===========

* Bugfixes
– Fix issue with webadmin returning 404 when ‘Host’ header doesn’t match anything in configured hosts
– Change url to guide in webadmin to working one

by Marek Foss at November 14, 2019 11:58

November 13, 2019

ProcessOne

Swift Server-Side Conference 2019 Highlights: Day 2

The second day of the Swift Server-Side conference was as packed with great talks as the first day. You can read my previous post on the workshop and day 1.

Building the next version of the Smoke Framework (Simon Pilkington)

Simon Pilkington introduced his rework of the Smoke framework, developed for the video ingestion platform of Amazon Prime Video.
This is a Swift framework used to accelerate the development of APIs in Swift. The framework starts from a Swagger API description and generates all the code needed to provide an API matching the specification.

Version 2 of the framework is on the way and focuses on improving the workflow, as performance in production is already great.

How we Vapor-ised our Mac app (Matias Piipari)

As expected, many developers in the Swift Server-Side community come from either iOS or Mac development. For those developers, Swift on the server is a way to reuse both code and skills to produce server-enabled applications.

Matias’ talk is about such a story: moving from a pure desktop Mac app for writing research papers to a collaboration tool, usable both on the Mac and on the web.

The transition has been successful, but this is only a start, as several shortcuts have been taken to be able to release a version 1. For example, for now the server application only runs on Mac servers, as some pieces of code require UIKit to build. The next step is to make the code more modular and fully remove the UIKit dependencies from the server components, to be able to run them on Linux.

Supercharging your Web APIs with gRPC (Daniel Alm)

Daniel Alm presented his work on gRPC Swift. He worked with George Barnett to provide a great support library for building and consuming gRPC services.

gRPC is based on protobuf and allows describing APIs in protobuf, using protobuf as the format for parameters and responses. It is slowly becoming a de facto standard for APIs that are efficient to decode and more stable than JSON ones. You can, for example, rename a parameter in your code without breaking your clients.

As noted by Ian Partridge:

gRPC Swift is a lot further along than you might think. There is protoc support for generating Swift service stubs plus production ready client and server libraries, and it all runs on SwiftNIO’s implementation of HTTP/2!

And indeed, this is a big piece in the Swift server ecosystem.

If you want to try it, make sure to use the new nio branch, which is based on SwiftNIO.

Building high-tech robots with Swift – a story from the front lines (Gerwin de Haan & Mathieu Barnachon)

Another great talk describing a very practical and impressive use case for Swift in general and Swift server-side more specifically.

Styleshoots is building tools & robots to improve the workflow of studios shooting images and small videos for e-commerce sites.

They started with small robots to take pictures with a high-end DSLR camera, controlled by an iPad.

When they needed to ramp up to larger systems, to be able to perform model shots and shoot short videos for social networks as well, they needed server components (for example for remote maintenance and for coordination / parameter exchange between multiple robots).

Moving to Swift server-side was a natural fit for them, and despite some attempts at other server-side tools like NodeJS, they went back to Swift server-side. It is a better fit for their team, allowing them to reuse both code and skills.

You should check out their web site, as the tools are impressive.

Testing SwiftNIO Systems (Johannes Weiss)

This SwiftNIO testing talk was one of my favorites. I am a big fan of testing tools (like QuickCheck) and techniques. Johannes Weiss did a great job explaining how hard it is to test networking protocol stacks and how SwiftNIO's modularity makes this task much easier. With a pipeline design, you can test one or two ChannelHandlers at a time.

The talk was packed with practical advice. For example, Johannes explained how to leverage SwiftNIO tools to help with testing, like EmbeddedChannel and NIOHTTP1TestServer.

I really recommend watching it once the video is released.

Maintaining a Library in a Swiftly Moving Ecosystem (Kaitlin Mahar)

Kaitlin Mahar put together a very engaging talk. She described the process of taking her Swift MongoDB library through the Swift Server Work Group incubation process. You can read her proposal here: Officially supported MongoDB Driver.

But she also did more and gave many pieces of advice to help library maintainers improve their version management and API evolution. For example, in no particular order:
– Use semantic versioning
– Use the @available attribute to let your users know about API changes and deprecations
– Prepare release notes and explain the reasons behind your changes
– Prepare a code migration guide in case of big changes in the API. Don’t let your users figure it out by themselves.
– Set up your CI/CD to run tests under all the supported OS and Swift versions.
– …

Fluently NoSQL: Creating FluentDynamoDB (Joe Smith)

Joe is the maintainer of the Fluent-DynamoDB library and contributes to AWS-SDK-Swift. He explained how he came to write those tools to help improve the alerting platform at Slack.

Full stack Swift development: how and why (Ivan Andriollo)

The final talk by Ivan Andriollo was both a great conclusion for the conference and a great talk in itself.

I must confess that I found this talk unexpectedly good. It was a talk by a consultant, sharing his view on agile programming and why it makes sense to kick-start your project by prototyping for production using Swift server-side and iOS clients. Such a talk could easily be boring or full of platitudes.

But Ivan’s talk was clear, exciting, and presented the ideas in a very convincing way. This is an approach I agree with, as building both Swift servers and clients is an approach we have started to deploy at ProcessOne.

To summarize in a few words: if you want to be efficient and produce a prototype of a mobile service that can go to production, you can use a team of Swift iOS and Swift server developers to iterate fast toward the result. This is indeed the most efficient approach, limiting coordination costs between separate teams. You can always add an Android client in a second stage, using the same API server (for example using gRPC).

This is a great conclusion for the conference, as this approach is the very essence and the most obvious raison d’être of the Swift server-side ecosystem.

Conclusion

Swift Server-side conference 2019 has been my favorite conference this year. The talks were very deep and interesting, the venue was beautiful, and the food has been great. But, most of all, the gathering of such a good number of passionate people, working together to make Swift on the server a reality, has been an exhilarating experience. You all know that I am a creator, loving to build new stuff. After Swift Server-Side conference, I went back home with new friends and the confidence that, led by such a group of people, Swift Server-Side is going to progress steadily in the coming months.

by Mickaël Rémond at November 13, 2019 20:44

SwiftNIO: Introduction to Channels, ChannelHandlers and Pipelines

Let’s keep on exploring the concepts behind SwiftNIO by playing with Channels, ChannelHandlers and ChannelPipelines.

This article was originally published in Mastering SwiftNIO, a new book exploring practical implementations of SwiftNIO. If you are new to SwiftNIO, you may want to first check out my previous article on SwiftNIO futures and promises.

What are SwiftNIO channels?

Channels are at the heart of SwiftNIO. They are responsible for many things in SwiftNIO:

  1. Thread-safety. A channel is associated for its lifetime with an EventLoop. All events processed for that channel are guaranteed to be triggered by the SwiftNIO framework on the same EventLoop. It means that the code you provide for a given channel is thread-safe (as long as you respect a few principles when adding your custom code). It also means that the ordering of the events happening on a given channel is guaranteed. SwiftNIO lets you focus on the business logic, handling the concurrency by design.
  2. Abstraction layer between application and transport. A channel keeps the link with the underlying transport. For example, a SocketChannel, used in TCP/IP clients or servers, keeps the link to its associated TCP/IP socket. It means that each new TCP/IP connection gets its own channel. In SwiftNIO, developers deal with channels, a high-level abstraction, not directly with sockets. The channel itself takes care of the interactions with the underlying socket.
  3. Applying the protocol workflow through dynamic pipelines. A channel coordinates its events and data flow through an associated ChannelPipeline, containing ChannelHandlers.

At this stage, the central role of channels may seem quite difficult to understand, but you will get a more concrete view as we progress through our example.

Step 1: Bootstrapping your client or server with templates

Before we can play with channels, pipelines and handlers, we need to set up the structure of our networking application.

Thus, the first step when you need to build a client library or a server framework is to set up the “master” Channel and tie it to an EventLoopGroup.

That task can be tedious and error-prone; that’s why the SwiftNIO project provides Bootstrap helpers for common use cases. It offers, for example:

  • A ClientBootstrap to set up TCP/IP clients.
  • A ServerBootstrap to set up TCP/IP servers.
  • A DatagramBootstrap to set up UDP clients or servers.

Setting up the connection

Here is the minimal client setup:

// 1
// Creating a single-threaded EventLoop group is enough
// for a client.
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! evGroup.syncShutdownGracefully()
}

// 2
// The basic component to help you write a TCP client is ClientBootstrap. You
// also have a ServerBootstrap to set up a default TCP server for you.
let bootstrap = ClientBootstrap(group: evGroup)
    .channelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)

do {
    // 3
    // Connect to the server
    let channel = try bootstrap.connect(host: "towel.blinkenlights.nl", port: 23).wait()    
} catch let err {
    print(err)
}

As you can see, we set up the client in three major steps:
1. Create the EventLoopGroup.
2. Create the ClientBootstrap.
3. Connect to the server in a synchronous way, here to a remote server on port 23 (telnet).

In SwiftNIO, the ClientBootstrap connect(host:port:) method does more than just trigger a TCP/IP connection. It also “bootstraps” it by setting up the channel, the socket parameters and the link to the channel’s event loop, and performs several other housekeeping operations.

Note on Threads & Blocking Operations:

In our example, the TCP/IP connection establishment is synchronous: we wait for the TCP/IP connection to be fully active.

In a real client, for example an iOS mobile client, we would just use the Channel future returned by the connect(host:port:) method, to avoid blocking the main UI thread.

Handling errors

The final part of the code handles errors: as the connection can fail, we catch the possible errors to display them.

In our example, as we are connecting to a famous public “telnet” server (towel.blinkenlights.nl), the connection should work for you too, as long as the network is available.

If you connect to localhost instead, where you likely have no telnet server running (you should not), the connection will fail with the following error:

NIOConnectionError(host: "localhost", port: 23, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:23, error: connection reset (error set): Connection refused (errno: 61)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:23, error: connection reset (error set): Connection refused (errno: 61))])

As you can see, SwiftNIO errors are very precise. Here we clearly see that the connection was refused:

Connection refused (errno: 61)

But if the DNS resolution fails because the host does not exist (for example using localhost2), you would also get a different and relevant error:

NIOConnectionError(host: "localhost2", port: 23, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "localhost2", port: 23)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "localhost2", port: 23)), connectionErrors: [])

Step 2: Defining your first ChannelInboundHandler

In the current state, the code is of little help. It just opens a TCP/IP connection on the target server, but does not do anything more.

To be able to receive connection events and data, you need to associate ChannelHandlers with your Channels.

You have two types of ChannelHandler available, defined as protocols:

  • The ChannelInboundHandlers are used to process incoming events and data.
  • The ChannelOutboundHandlers are used to process outgoing events and data.

A channel handler can implement the inbound or the outbound ChannelHandler protocol, or both.

Methods in the protocols are optional, as SwiftNIO provides default implementations. However, you need to properly set up the required type aliases InboundIn and OutboundOut for your handler to work. Generally, you will use SwiftNIO’s ByteBuffer to convey the data at the lowest level. ByteBuffer is an efficient copy-on-write binary buffer. However, you can write handlers that work at a higher level and transform the data into more protocol-specific, ready-to-use data types. These types of handlers are called “codecs” and are responsible for encoding / decoding data.

For an inbound channel handler, you have a set of available methods you can implement to process events. Here are a few of them:

  • channelActive(context:): Called when the Channel has become active and is able to send and receive data. In our TCP/IP example, this method is called when the connection is established. You can use this method to perform post-connect operations, like sending the initial data required to open your session.
  • channelRead(context:data:): Called when some data has been read from the remote peer. This is called for each chunk of data received over the connection. Note that the data may be split across several calls to this method.
  • channelReadComplete(context:): Called when the Channel has completed its current read loop.
  • channelInactive(context:): Called when the Channel has become inactive and is no longer able to send and receive data. In our TCP/IP example, this method is triggered after the connection has been closed.
  • errorCaught(context:error:): Called when an error happens while receiving data, or if an error was encountered in a previous inbound step. This can be called, for example, when the TCP/IP connection has been lost.

The context parameter receives a ChannelHandlerContext instance. It lets you access important properties, like the channel itself, so that you can, for example, write data back, going through the outbound sequence of handlers. It contains important helpers that you will need to write your networking code.

Let’s show a simple ChannelInboundHandler implementing only a few methods. In the following code, the handler prints some connection events as they happen (client is connected, client is disconnected, an error occurred):

private final class PrintHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    func channelActive(context: ChannelHandlerContext) {
        print("Client is connected to server")
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("Channel error: \(error)")
    }

    func channelInactive(context: ChannelHandlerContext) {
        print("Client is disconnected")
    }
}

The channelActive and channelInactive methods are called when the connection has been established or closed. The errorCaught method will print any error that occurs during the session.

We will learn more about how the handlers are called in the next section, when talking about the channel’s pipeline.

Step 3: Setting up your channel pipeline

To be able to receive data from the server, you need to add at least one ChannelHandler to your ChannelPipeline. You do so by attaching a piece of code to run on each new channel: the channelInitializer. The initializer is called to set up every new channel. That’s typically the place where you define your ChannelPipeline.

What is the ChannelPipeline?

The pipeline organizes the sequence of inbound and outbound handlers as a chain:

In each handler, you can decide what to do with the data you received. You can buffer it, transform it, pass it further down the pipeline chain, etc. An inbound handler can even react directly to raw data and post some data back on the outbound path. As events are processed and refined while progressing through the pipeline, the pipeline and its ChannelHandlers are a good way to organize your application and clearly split the networking code from the business logic.

Even though the previous diagram shows, for clarity, the ChannelInboundHandler and ChannelOutboundHandler instances as separate chains, they are actually part of the same pipeline. They are represented as two separate paths because inbound handlers are only called for inbound events and outbound handlers are only triggered on outbound events. However, a channel has a single pipeline at any given time. The numbers in the diagram show each handler’s position in the pipeline list.

In other words, when an event is propagated, only the handlers that can handle it are triggered. The ChannelInboundHandlers are triggered in pipeline order when receiving data, for example, and the ChannelOutboundHandlers are triggered in reverse pipeline order when sending data, as shown in the first diagram.

It means that if a ChannelInboundHandler decides to write something back to the Channel, using its context, the data will skip the rest of the ChannelInboundHandler chain and go directly through all the ChannelOutboundHandler instances located earlier in the ChannelPipeline than the ChannelInboundHandler writing the data. The following diagram shows the data flow in that situation:

Pipeline setup

To set up your pipeline, you can use the addHandler(handler:name:position:) method on the channel pipeline object. The addHandler method can be called from anywhere, from any thread, so to enforce thread-safety it returns a future. To add several handlers in a row, you can chain the addHandler calls with the future’s flatMap() method, or prefer the addHandlers(handlers:position:) method.

As channel pipelines are dynamic, you can also remove handlers with the removeHandler(name:) method.

For a server, most of the pipeline setup is done on the child channels’ handlers, not on the main server channel. That way, the pipeline handlers are attached to each newly connected client channel.
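Putting steps 1 to 3 together, here is a hedged sketch of installing a handler from the channelInitializer, assuming SwiftNIO 2's API and the PrintHandler defined earlier (the DecoderHandler mentioned in the comment is hypothetical):

```swift
import NIO

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer { try! evGroup.syncShutdownGracefully() }

let bootstrap = ClientBootstrap(group: evGroup)
    // The initializer runs for every new channel, before the connect
    // future succeeds: this is where the pipeline is populated.
    .channelInitializer { channel in
        // addHandler returns a future; returning it lets the bootstrap
        // wait until the pipeline is ready.
        channel.pipeline.addHandler(PrintHandler())
        // To install several handlers in order (DecoderHandler is
        // hypothetical), you could instead write:
        // channel.pipeline.addHandlers([DecoderHandler(), PrintHandler()])
    }

do {
    let channel = try bootstrap.connect(host: "towel.blinkenlights.nl", port: 23).wait()
    // Keep the program alive until the server closes the connection.
    try channel.closeFuture.wait()
} catch let err {
    print(err)
}
```

With this in place, every event on the connection flows through PrintHandler, which is exactly what step 4 relies on.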

Let’s see in step 4 how to process incoming data through a one-handler pipeline.

Step 4: Opening a data stream & processing it

Blinkenlights server

To demonstrate data reception, we will use a famous public server whose role is simply to send data over a plain TCP/IP connection.

The data is just “pages” of text, with ANSI terminal codes to reset the page display and print them “in place”. Using that trick, the server plays an ASCII art animated version of Star Wars, Episode IV, recreated with manual layout.

Even if you do not run the code from an ANSI-compliant terminal, you should be able to see all the pages printed at the bottom of your log file and get a feel of the animation.
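For reference, the “reset the page display” trick relies on ANSI escape sequences like the following (a minimal sketch; the exact codes the server sends may differ):

```swift
// ESC[2J clears the screen, ESC[H moves the cursor to the top-left corner,
// so whatever is printed afterwards appears "in place".
let resetPage = "\u{1B}[2J\u{1B}[H"
print(resetPage + "frame content goes here", terminator: "")
```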

Updating our handler code

We are going to add two new methods in our handler:

  • channelRead(context:data:): As our “protocol” is very basic and simply sends frames to display on a regular basis, we can just accumulate the data in a ByteBuffer. Our implementation will convert the incoming data to a buffer, using the self.unwrapInboundIn(data) method, and append it to a temporary buffer.
  • channelReadComplete(context:): As we are reading frames, we will use this method to actually display the data we have previously buffered. We assume that when no more data is available to read, we have received a full frame. We then print the content of our temporary buffer to the terminal at once and empty the buffer.

We also modify the channelActive(context:) method to allocate and set up our temporary ByteBuffer. You can reuse the channel allocator from the context to allocate your buffer.

Here is the code of our PrintHandler:

private final class PrintHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    var buffer: ByteBuffer?

    func channelActive(context: ChannelHandlerContext) {
        buffer = context.channel.allocator.buffer(capacity: 2000)
        print("Client is connected to server")
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        var byteBuffer = self.unwrapInboundIn(data)
        buffer?.writeBuffer(&byteBuffer)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        if let length = buffer?.readableBytes {
            if let str = buffer?.readString(length: length) {
                print(str)
            }
        }
        buffer?.clear()
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("Channel error: \(error)")
    }

    func channelInactive(context: ChannelHandlerContext) {
        print("Client is disconnected")
    }
}

Note that when reading the data, it is converted to our InboundIn typealias (in this case a ByteBuffer), using the unwrapInboundIn() method. Several unwrappers are provided (e.g. to ByteBuffer or FileRegion), but you can also create custom ones.

The overall SwiftNIO code setup is very simple:

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! evGroup.syncShutdownGracefully()
}

let bootstrap = ClientBootstrap(group: evGroup)
    .channelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
    .channelInitializer { channel in
        channel.pipeline.addHandler(PrintHandler())
        }

// Once the Bootstrap client is setup, we can connect
do {
    _ = try bootstrap.connect(host: "towel.blinkenlights.nl", port: 23).wait()
} catch let err {
    print("Connection error: \(err)")
}

// Wait for return before quitting
_ = readLine()

The main change from the previous setup is that we have added a channelInitializer, in charge of setting up the channel pipeline with our PrintHandler.

Note: A pipeline is dynamic

What you need to keep in mind about the channel pipeline is that it can change during the lifetime of the channel. The channelInitializer is called to set up an initial pipeline, but you can change it at any time during the life of the channel.

Many protocol implementations use this feature to model protocol state switching during the communication between client and server.

Step 5: Running the code

You can check the full example code on GitHub: Blinkenlights.

Build and run it with:

swift build
swift run

So, finally, when you run the SwiftNIO console application from your terminal, you should be able to see an ASCII art Star Wars, Episode IV story:

Conclusion

This example is very simple but already gives you a glimpse of how you are going to organise a SwiftNIO client or server.

With your new knowledge of channel handlers and pipelines, you should be able to understand simple client / server examples, like the echoClient and echoServer examples in the SwiftNIO repository.

Channels, handlers and pipelines are really at the heart of the SwiftNIO architecture. There is a lot more to learn about handlers and pipelines, such as handlers implementing protocol coders / decoders (codecs). We will dig into more advanced topics in a future article and in my “Mastering SwiftNIO” book.

In a future post, we will show how to use multiple handlers in the pipeline to process raw data, pass it through codecs, and hand the resulting info to your higher-level application handlers.

In the meantime, you should already have a lot to play with.

Please do not hesitate to ask questions, and share this article if you liked it.

Photo by chuttersnap on Unsplash

by Mickaël Rémond at November 13, 2019 17:12

SwiftNIO: Understanding Futures and Promises

SwiftNIO is Apple's non-blocking networking library. It can be used to write either client libraries or server frameworks, and works on macOS, iOS and Linux.

It is built by some of the Netty team members as a port of Netty, a high-performance networking framework written in Java, adapted to Swift. SwiftNIO thus reuses years of experience designing a proven framework.

If you want to understand in depth how SwiftNIO works, you first have to understand its underlying concepts. I will start in this article by explaining the concept of futures and promises. The ‘future’ concept is available in many languages, including JavaScript and C# (under the name async / await) and Java and Scala (under the name ‘future’).

Futures and promises

Futures and promises are a set of programming abstractions for writing asynchronous code. The principle is quite simple: your asynchronous code returns a promise instead of the final result. The code calling your asynchronous function is not blocked and can do other operations before it finally decides to block and wait for the result, if and when it really needs to.

Even if the words ‘futures’ and ‘promises’ are often used interchangeably, there is a slight difference in meaning. They represent different points of view on the same value placeholder. As explained in the Wikipedia page:

A future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future.

In other words, the future is what the client code receives and can use as a handle to access the value once it has been set. The promise is the handle the asynchronous code keeps in order to write the value when it is ready, thus fulfilling the promise by resolving the future value.
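The read/write split can be made concrete with a deliberately tiny sketch. This is illustrative only: SwiftNIO's real EventLoopFuture/EventLoopPromise are thread-safe and tied to an EventLoop, while this single-threaded model is not.

```swift
// A minimal future: the read-only side, consumed via a completion callback.
final class MiniFuture<Value> {
    fileprivate var value: Value?
    private var callbacks: [(Value) -> Void] = []

    // Client side: register interest in the eventual value.
    func whenSuccess(_ callback: @escaping (Value) -> Void) {
        if let value = value {
            callback(value)
        } else {
            callbacks.append(callback)
        }
    }

    fileprivate func fulfill(_ newValue: Value) {
        value = newValue
        callbacks.forEach { $0(newValue) }
        callbacks.removeAll()
    }
}

// The promise: the writable, single-assignment side that completes the future.
struct MiniPromise<Value> {
    let futureResult = MiniFuture<Value>()
    func succeed(_ value: Value) { futureResult.fulfill(value) }
}

let promise = MiniPromise<Int>()
var received: Int?
promise.futureResult.whenSuccess { received = $0 }  // nothing happens yet
promise.succeed(42)                                 // fulfills the future
```

The producer hands out `futureResult` and keeps the promise to itself, which is exactly the division of roles described above.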

Let’s see in practice how futures and promises work.

SwiftNIO comes with a built-in futures and promises library. The code lies in EventLoopFuture. Don’t be fooled by the name: It is a full-featured ‘future’ library that you can use in your code to handle asynchronous operations.

Let’s see how you can use it to write asynchronous code, without specific reference to SwiftNIO-oriented networking operations.

Note: The examples in this blog post should work both on macOS and Linux.

Anatomy of SwiftNIO future / promise implementation

Step 1: Create an EventLoopGroup

The basic skeleton for our example is as follow:

import NIO

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

// Do things

try evGroup.syncShutdownGracefully()

We create an EventLoopGroup and shut it down gracefully at the end. A graceful shutdown means it will properly terminate the asynchronous jobs being executed.

An EventLoopGroup can be seen as a provider of execution contexts for your asynchronous code. You can ask the EventLoopGroup for an execution context: an EventLoop. Basically, each execution context, each EventLoop, is a thread. EventLoops are used to provide an environment to run your concurrent code.

In the previous example, we create as many threads as we have cores on our computer (System.coreCount), but the number of threads could be as low as 1.

Step 2: Getting an EventLoop to execute your promise

In SwiftNIO, you cannot model concurrent execution without at least an event loop. For more info on what I mean by concurrency, you can watch Rob Pike's excellent talk: Concurrency is not parallelism.

To execute your asynchronous code, you need to ask the EventLoopGroup for an EventLoop. You can use the method next() to get a new EventLoop, in a round-robin fashion.

The following code gets 10 event loops, using the next() method and prints the event loops information.

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

for _ in 1...10 {
    let ev = evGroup.next()
    print(ev)
}

// Do things

try evGroup.syncShutdownGracefully()

On my system, with 8 cores, I get the following result:

System cores: 8

SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 5 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 6 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 7 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 8 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 9 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 10 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }

The description represents the id of the EventLoop. As you can see, you get 8 different loops before being assigned an existing EventLoop from the same group again. As expected, this matches our number of cores.

Note: Under the hood, most EventLoops are implemented using NIOThread, so that the implementation can be cross-platform: NIO threads are built on POSIX threads. However, some platform-specific loops, like NIO Transport Services, are free from multiplatform constraints and use Apple's Dispatch library. This means that if you are targeting only macOS, you can use SwiftNIO futures and promises directly with the Dispatch library. As libdispatch now ships with Swift on Linux, it could also work there, but I have not tested it yet.

Step 3: Executing async code

If you just want to execute async code without waiting for a result, you can simply pass a closure to EventLoop.execute(_:):

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

ev.execute {
    print("Hello, ")
}
// sleep(1)
print("world!")

try evGroup.syncShutdownGracefully()

In the previous code, the order in which “Hello, ” and “world!” are displayed is undetermined.

Still, on my computer, it is clear that they are not executed in order. The print-out in the execute block is run asynchronously, after the execution of the print-out in the main thread:

System cores: 8

world!
Hello, 

You can uncomment the sleep(1) call to insert one second of delay before the second print-out. It will “force” the ordering by delaying the main thread's print-out, so that “Hello, world!” is displayed in sequence.

Step 4: Waiting for async code execution

Adding timers to your code to order execution is very bad practice. If you want to wait for the async code to finish, that’s where ‘futures’ and ‘promises’ come into play.

The following code submits async work to run on an EventLoop. The asyncPrint function waits for a given delay on the EventLoop and then prints the passed string.

When you call asyncPrint, you get a future in return. With that future, you can call the wait() method to wait for the completion of the async code.

import NIO

// Async code
func asyncPrint(on ev: EventLoop, delayInSecond: UInt32, string: String) -> EventLoopFuture<Void> {
    // Do the async work; submit returns a future of the closure's result
    let future = ev.submit {
        sleepAndPrint(delayInSecond: delayInSecond, string: string)
        return
    }

    // Return the future
    return future
}

func sleepAndPrint(delayInSecond: UInt32, string: String) {
    sleep(delayInSecond)
    print(string)
}

// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let future = asyncPrint(on: ev, delayInSecond: 1, string: "Hello, ")

print("Waiting...")
try future.wait()

print("world!")

try evGroup.syncShutdownGracefully()

The print-out will pause for one second on the “Waiting…” message and then display the “Hello, ” and “world!” messages in order.

Step 5: Promises and futures result

When you need a result, your promise has to give you more than just a signal letting you know the processing is done. Thus, it will not be a promise of a Void result, but a promise of a more complex value.

First, let’s see a promise of a simple result that cannot fail. Your async code can return a promise of the result of a factorial calculation. Your code promises to return a Double and then submits the job to the EventLoop.

import NIO

// Async code
func asyncFactorial(on ev: EventLoop, n: Double) -> EventLoopFuture<Double> {
    // Do the async work
    let promise = ev.submit { () -> Double in
        return factorial(n: n)
    }

    // Return the promise
    return promise
}

// I would use a BigInt library to go beyond small-number factorial
// calculations, but I do not want to introduce an external dependency.
func factorial(n: Double) -> Double {
    if n >= 0 {
        return n == 0 ? 1 : n * factorial(n: n - 1)
    } else {
        // Factorial is undefined for negative numbers
        return Double.nan
    }
}

// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let n: Double = 10
let future = asyncFactorial(on: ev, n: n)

print("Waiting...")

let result = try future.wait()

print("fact(\(n)) = \(result)")

try evGroup.syncShutdownGracefully()

The code will be executed asynchronously and the wait() method will return the result:

System cores: 8

Waiting...
fact(10.0) = 3628800.0

Step 6: Success and error processing

If you are doing network operations, like downloading a web page, the operation can fail. You thus need to handle a more complex result that can be either a success or an error. Swift offers a ready-made type for this, called Result.
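As a minimal illustration of the success / error duality outside any networking code (the names here are invented for the example), Swift's standard library Result type can wrap either a page body or an error:

```swift
// A success carries the page body; a failure carries a typed error.
enum DownloadError: Error {
    case httpError(Int)
}

func parse(statusCode: Int, body: String) -> Result<String, DownloadError> {
    guard (200...299).contains(statusCode) else {
        return .failure(.httpError(statusCode))
    }
    return .success(body)
}

let ok = parse(statusCode: 200, body: "<html></html>")
let bad = parse(statusCode: 404, body: "")
```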

In the next example, we will show an async function performing a network operation using callbacks and returning a future String result. The future will either be fulfilled with the content of the downloaded page or failed with an error.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Task done")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

print("Waiting...")

let future = asyncDownload(on: ev, urlString: "https://www.process-one.net/en/")
future.whenSuccess { page in
    print("Page received")
}
future.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

The previous code will either print “Page received” when the page is downloaded, or print the error. As your success handler receives the page content itself, you could do something with it (print it, analyse it, etc.).

Step 7: Combining async work results

Where promises really shine is when you want to chain several async calls that depend on each other. You can then write code that reads as a logical sequence, but actually runs asynchronously.

In the following code, we reuse the previous async download function and process several pages by counting the number of div elements in all pages.

By wrapping this processing in a reduce function, we can download all the web pages in parallel. We receive each page's data as it is downloaded and keep a running count of the number of div elements per page. Finally, we return the total as the future result.

This is a more involved example that should give you a better taste of what developing with futures and promises looks like.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Loading \(url)")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

var futures: [EventLoopFuture<String>] = []

for url in ["https://www.process-one.net/en/", "https://www.remond.im", "https://swift.org"] {
    let ev = evGroup.next()
    let future = asyncDownload(on: ev, urlString: url)
    futures.append(future)
}


let futureResult = EventLoopFuture.reduce(0, futures, on: evGroup.next()) { (count: Int, page: String) -> Int in
    let tokens = page.components(separatedBy: "<div")
    return count + tokens.count - 1
}

futureResult.whenSuccess { count in
    print("Result = \(count)")
}
futureResult.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

This code actually builds a pipeline as follows:
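As an aside, the div counting inside the reduce closure is plain string processing and can be sketched and checked in isolation: splitting on "<div" yields one more component than there are occurrences.

```swift
import Foundation

// Count occurrences of "<div" in a page, as the reduce closure does.
func divCount(in page: String) -> Int {
    return page.components(separatedBy: "<div").count - 1
}
```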

Conclusion

Futures and promises are at the heart of SwiftNIO design. To better understand SwiftNIO architecture, you need to understand the futures and promises mechanism.

However, there are more concepts that you need to master to fully understand SwiftNIO. Most notably, inbound and outbound channel handlers let you structure your networking code into reusable components executed in a pipeline.

I will cover more SwiftNIO concepts in a future blog post. In the meantime, please send us your feedback :)

by Mickaël Rémond at November 13, 2019 17:12

Swift Server-Side Conference 2019 Highlights: Workshop & Day 1

Swift is mostly known nowadays as the main programming language for developing on Apple devices. However, Swift being open source, a small community of dedicated people has started building an ecosystem to make server-side Swift development a viable option.

Swift Server-Side is a fairly new conference dedicated to running Swift applications on the server. In practice, many people from the Swift server-side ecosystem attend this conference. I attended the second edition of this conference, held from Oct 30 to Nov 1, 2019 in Copenhagen.

Overall Impressions

I had missed the first edition last year in Berlin, but this second edition was a nice opportunity to meet with the community. Some people are still a bit reluctant to bet on Swift on Linux for fear of Apple being too prominent in the development of the server ecosystem.

Judging from the mindset of the community, however, this is not a worry. The crowd, coming from various companies, was extremely involved in the Swift server-side ecosystem. People did not come to listen to Apple give directions; they came because they are passionate about Swift and feel it is a very good fit on the server, a good middle ground between Rust and Go.

The developers there are well aware of the current weaknesses of Swift on the server, and are not waiting for a large company to solve them. The community created the Swift Server Work Group (SSWG), which works on sharing common pieces of code between frameworks to avoid redundant work. The SSWG has a plan, and the community is building the missing parts of the ecosystem piece by piece, from Open Foundation improvements (the standard library) to various types of drivers and libraries.

What I have seen at work is a vibrant, highly knowledgeable community that has a plan to get to the point where Swift on the server is a solid choice for developers.

Workshops

Day one was dedicated to workshops. I attended two of them and enjoyed working directly with the main developers of each project.

Contributing to SwiftNIO and SSWG

I enjoyed being guided by Johannes Weiss & Cory Benfield on how to contribute to the Swift Server Work Group and to SwiftNIO. They were happy with the result of their workshop: 30 pull requests for SwiftNIO to process during the following days.

Build a cloud-native app with Kitura

This workshop was an introduction to Kitura, guided by its lead developer Ian Partridge. It was a nice way to get into Kitura and see the benefit of using Swift across three projects developed in parallel: a Swift service running on Linux, a macOS admin dashboard and an iOS client.

You can follow the material of the workshop on your own, using this repository: Kitura SOS Workshop

Conference Day 1

Swift Server Work Group Update (Logan Wright)

It was a nice summary of the progress of the Swift Server Work Group, showing the road ahead for framework and library developers. The bottom line is that the number of committers across Swift server and library projects is growing. By coordinating the effort and sharing code, the community hopes to reach the point where developers have access to all the libraries they need to build their applications. Besides a common network framework (SwiftNIO), we now have Metrics and Logging initiatives, as well as several database drivers, as part of the SSWG effort.

My personal view is that the ecosystem development will accelerate. As Apple has now added support for Swift packages in Xcode, it is possible to write libraries that work on iOS, macOS and Linux alike. This is going to grow the package set even faster.

You can learn more about the progress here: SSWG Annual Update

Resilient Micro-Services with Vapor (Caleb Kleveter)

Caleb Kleveter did a good job of explaining best practices for building micro-services in Swift, inspired notably by his work on SwiftCommerce.

Some advice applies to micro-services in general, not only Swift ones, especially regarding “external” resilience:

General abstraction rules are more specific to Vapor:

Static site generation in Swift (John Sundell)

John Sundell introduced how he wrote tools to generate static HTML for his various websites (WWDC by Sundell and Swift by Sundell).

He announced that his static site generation tools — Ink, Plot and Publish — will be open sourced by the end of the year. You can follow him on Github to check when the code will be released: github.com/JohnSundell 

API Performance: A macro talk about micro decisions (Joannis Orlandos)

Joannis Orlandos gave us some food for thought about API performance from different points of view:
– For users, performance is often response time
– For developers, it is more often requests per second
– For sysadmins, it is CPU and memory footprint
– For management, it is development time and hosting cost

SwiftNIO addresses most aspects of server-side performance, with development time often being addressed by frameworks built on top of SwiftNIO.

He also shared practical tips, for example avoiding auto-increment IDs in databases in favor of UUIDs, to limit locks on the increment code.
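To make the tip concrete (a trivial sketch, not from the talk): a client-generated key such as a UUID needs no central counter, and therefore no lock around an increment.

```swift
import Foundation

// Generate the primary key on the client side instead of relying on a
// database auto-increment counter.
let primaryKey = UUID().uuidString
print(primaryKey)
```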

Cloud-native Swift micro-services (Ian Partridge)

Ian Partridge did a very good job of presenting the progress of Kitura itself and the set of tools around the Kitura framework. Most notably he mentioned:
– The release of Kitura 2.9
– The SwiftKafka library (a wrapper around …)
– The Hystrix circuit breaker library (not a Swift project directly, but useful in a micro-service architecture)

Finally, he introduced Appsody, a tool to quickly bootstrap Docker / Kubernetes based micro-services. It is not exclusive to Swift and can also bootstrap Java, JavaScript, Rust or Python web services.

Breaking into tech (Heidi Hermann)

Heidi shared her experience and her view of the tech community, and gave advice on how we can make it more welcoming. It was a really great talk, and she properly demonstrated that, as a whole, we are failing at training people and helping them make progress, and at bringing other views, opinions and backgrounds into our companies.

As she said, most tech companies these days are only hiring senior developers. They consider the pressure to deliver higher than the need to prepare for the future. Stated this way, it is clearly not sustainable, especially as there is a shortage of experienced developers. We all need to work to build the tech community of tomorrow.

Building State Machines in Swift (Cory Benfield)

This was another fantastic talk. Coming from an Erlang background, where state machines are a first-class process type (see gen_fsm and now gen_statem), I really enjoyed Cory’s take on the Swift-friendly approach. Thanks to enums and type checking, Swift helps you write robust and safe state machines.

He also shared a lot of nice design tips, showing how to properly encapsulate states as enums to prevent users of your state machine from messing with it.
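A hedged sketch of that enum-based style (the states and transitions here are invented for illustration, not taken from Cory's slides): the enum makes the state space explicit, `private(set)` keeps outsiders from mutating it directly, and each transition rejects invalid moves.

```swift
// All possible states, exhaustively listed.
enum ConnectionState {
    case idle, connecting, active, closed
}

struct ConnectionStateMachine {
    // Readable from outside, but only mutable through the transitions below.
    private(set) var state: ConnectionState = .idle

    mutating func connect() -> Bool {
        guard state == .idle else { return false }   // reject invalid transition
        state = .connecting
        return true
    }

    mutating func established() -> Bool {
        guard state == .connecting else { return false }
        state = .active
        return true
    }

    mutating func close() {
        state = .closed
    }
}
```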

Swift Development on Linux (Jonas Schwartz)

Finally, Jonas Schwartz shared his setup and tips for developing in Swift on Linux. While it is clearly a bit rougher than using a Mac, he showed that it is definitely possible. Once you are set up, it can even be an enjoyable experience.

Server-side Swift Panel

The panel concluded the day with a lucid overview of what is currently working well with server-side Swift, like the fact that it is already production-ready and the community's strong coordination effort, but also covered the missing pieces, such as the lack of async/await in the Swift language at the moment and needed improvements to Foundation on Linux.

and Day 2 …

In a future blog post, I will cover day 2 of the Server-Side Swift conference and share my conclusions.

In the meantime, do not hesitate to share your questions, concerns or feedback about Server-Side Swift.

by Mickaël Rémond at November 13, 2019 17:11

Understanding ejabberd OAuth Support & Roadmap

Login and password authentication is still the most commonly used auth mechanism on XMPP services. However, it raises security concerns, because it requires storing the credentials in the client app in order to log in again without asking for the password.

Mobile APIs on iOS and Android let you encrypt data at rest, but still, it is best not to rely on storing any password at all.

Fortunately, several solutions exist – all supported by ejabberd. You can use either OAuth or certificate-based authentication. As client certificate management is still quite tricky, I will focus in this post on explaining how to set up and use ejabberd's OAuth support.

Understanding ejabberd OAuth Support

The principle of OAuth is simple: OAuth offers a mechanism to let your users generate a token to connect to your service. The client can keep just that token to authenticate, and is not required to store the password for subsequent authentications.

Implicit grant

As of ejabberd 19.09, ejabberd supports only the OAuth implicit grant. Implicit grant is often used to let third-party clients — clients you do not control — connect to your server.

The implicit grant requires redirecting the client to a web page, so the client never even sees the user's login and password. Indeed, as you cannot trust third-party clients, this is the sane thing to do to keep your users' passwords from being typed directly into any third-party client. You can never be sure that the client will not store them (locally, or worse, in the cloud).

With the implicit grant, the client app directs the user to the sign-in page on your server to authenticate and get the token, usually with a login and password (but the mechanism can be different and could involve 2FA, for example). Your website then uses a redirect URL, passed back to the client, that contains the token to use for logging in. The redirect usually happens via a client-registered domain or a custom URL scheme.
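For reference, the authorization request that the browser issues in the implicit grant flow typically looks like the following. The path matches ejabberd's OAuth handler; the client_id, redirect_uri and host values are placeholders for illustration:

```
https://example.net:5443/oauth/authorization_token?response_type=token&client_id=my-client-id&redirect_uri=myapp://oauth-callback&scope=sasl_auth
```

After a successful sign-in, the server redirects to the redirect_uri with the token appended in the URL fragment, which the client extracts and stores.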

… and password grant

The implicit grant workflow is not ideal if your ejabberd service is only usable with your own client. Using web view redirects can feel cumbersome in your onboarding workflow. As you trust the client, you would probably like to call an API directly with the login and password, get the OAuth token back, and forget about the password. The user experience will be more pleasant and feel more native.

This flow is known in OAuth as the OAuth password grant.

In the upcoming ejabberd version, you will be able to use the OAuth password grant in addition to the implicit grant. The feature is already in the ejabberd master branch, so this is a good opportunity to try it and share your feedback.

Let’s use ejabberd OAuth Password grant in practice

Step 1: ejabberd configuration

To support OAuth2 in ejabberd, add the following directives to the ejabberd config file:

# Default duration for generated tokens (in seconds)
# Here the default value is 30 days
oauth_expire: 2592000
# OAuth token generation is enabled for all server users
oauth_access: all
# Check that the client ID is registered
oauth_client_id_check: db

You also need to add the OAuth request handler to your ejabberd HTTPS listener:

listen:
  # ...
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      # ...
      "/oauth": ejabberd_oauth

Note: I am using HTTPS, even for a demo, as it is mandatory on iOS. During the development phase, you should create your own CA and add a trusted development certificate to ejabberd. Read the following blog post if you need guidance on how to do that: Using a local development trusted CA on MacOS

You can download my full test config file here: ejabberd.yml

Step 2: Registering an OAuth client

If you produce a first-party client, you can bypass the need for OAuth to redirect to your browser to get the token.

As you trust the application you are developing, you can let the user of your app directly enter the login and password inside your client. However, you should never store the password directly, only the OAuth tokens.

In ejabberd, I recommend you first register an OAuth client, so that the server can check that the client id is known.

You can use the ejabberdctl command oauth_add_client_password, or use the Erlang command line.

Here is how to use ejabberdctl to register a first-party client:

ejabberdctl oauth_add_client_password <client_id> <client_name> <secret>

As the feature is still in development, you may find it easier to register your client directly from the Erlang command line. The parameters are client_id, client_name and a secret:

1> ejabberd_oauth:oauth_add_client_password(<<"client-id-Iegh7ooK">>, <<"Demo client">>, <<"3dc8b0885b3043c0e38aa2e1dc64">>).
{ok,[]}

Once you have registered a client, you can start generating OAuth tokens for your users from your client, using an HTTPS API.

Step 3: Generating a password grant token

You can use the standard OAuth2 password grant query to get a bearer token for a given user. You will need to pass the user JID and the password. You also need to request the OAuth scope sasl_auth so that the token can be used for authentication directly in the XMPP flow.

Note: As you are passing the client secret as a parameter, you must use HTTPS in production for those queries.

Here is an example query to get a token using the password grant flow:

curl -i -X POST 'https://localhost:5443/oauth/token' -d grant_type=password -d username=test@localhost -d password=test -d client_id=client-id-Iegh7ooK  -d client_secret=3dc8b0885b3043c0e38aa2e1dc64 -d scope=sasl_auth

HTTP/1.1 200 OK
Content-Length: 114
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{"access_token":"DGV4JFzW15iZFmsnvzT7IymupTAYvo6U","token_type":"bearer","scope":"sasl_auth","expires_in":2592000}

As you can see, the response is a JSON object. You can easily extract the access_token field from it; that's the part you will use to authenticate on XMPP.

Step 4: Connecting on XMPP using an OAuth token

To authenticate over XMPP, you need to use the X-OAUTH2 mechanism. X-OAUTH2 was defined by Google for Google Talk and reused later by Facebook chat. You can find Google's description here: XMPP OAuth 2.0 Authorization.

Basically, it encodes the JID and token as in SASL PLAIN authentication, but instead of passing the PLAIN keyword as the mechanism, it uses X-OAUTH2. ejabberd will thus know that it has to check the secret against the token table in the database, instead of checking the credentials against the password table.

Quick demo

Next, let’s demonstrate the connection using the Fluux Go XMPP library, which is the only library I know of that supports OAuth tokens today.

Here is an example client login on XMPP with an OAuth2 token:

package main

import (
    "fmt"
    "log"
    "os"

    "gosrc.io/xmpp"
    "gosrc.io/xmpp/stanza"
)

func main() {
    config := xmpp.Config{
        Address:      "localhost:5222",
        Jid:          "test@localhost",
        Credential:   xmpp.OAuthToken("DGV4JFzW15iZFmsnvzT7IymupTAYvo6U"),
        StreamLogger: os.Stdout,
    }

    router := xmpp.NewRouter()
    router.HandleFunc("message", handleMessage)

    client, err := xmpp.NewClient(config, router)
    if err != nil {
        log.Fatalf("%+v", err)
    }

    // If you pass the client to a connection manager, it will handle the reconnect policy
    // for you automatically.
    cm := xmpp.NewStreamManager(client, nil)
    log.Fatal(cm.Run())
}

func handleMessage(s xmpp.Sender, p stanza.Packet) {
    msg, ok := p.(stanza.Message)
    if !ok {
        _, _ = fmt.Fprintf(os.Stdout, "Ignoring packet: %T\n", p)
        return
    }

    _, _ = fmt.Fprintf(os.Stdout, "Body = %s - from = %s\n", msg.Body, msg.From)
}

The important part for OAuth is that you tell the library to use an OAuth2 token, with the following value in the xmpp.Config struct:

xmpp.Config{
    // ...
    Credential: xmpp.OAuthToken("DGV4JFzW15iZFmsnvzT7IymupTAYvo6U"),
}

You can check the example in Fluux XMPP example directory: xmpp_oauth2.go

There is more

As I said, ejabberd OAuth support is not limited to the password grant. Since ejabberd 15.09, we have supported implicit grant generation, and it is still available. You can find more information in the ejabberd documentation: OAuth

Moreover, there is more than XMPP authentication with OAuth 2. In the current development version, you can authenticate your devices on the ejabberd MQTT service using MQTT 5.0 Enhanced Authentication. The authentication method is the same as for XMPP: we reuse the X-OAUTH2 method name. When you try to use this method, the server confirms that the method is allowed, and you pass your token in return.

Please note that you will need an MQTT 5.0 client library to use OAuth2 authentication with MQTT.

Conclusion

ejabberd OAuth XMPP and MQTT authentication uses the informal auth mechanism that was introduced by Google Talk and reused by Facebook. It does the job and fills an important security need.

That said, I would love to see more standard support from the XMPP Standards Foundation regarding OAuth authentication. For example, a specification translating OAuth authentication into the XMPP flow would be of great help.

Still, in the meantime, I hope more libraries will support that informal OAuth specification, so that client developers have a good alternative to local password storage for subsequent authentications.

Please, give it a try from master and send us feedback if you want to help us shape the evolution of OAuth support in ejabberd.

… And let’s end password-oriented client authentication :)

by Mickaël Rémond at November 13, 2019 17:11