Planet Jabber

November 17, 2019

Ignite Realtime Blog

HTTP File Upload plugin 1.1.3 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.1.3 of the HTTP File Upload plugin for Openfire!

This plugin enables users to share files in one-on-one and group chats by uploading a file to a server and providing a link.

This update fixes an issue with the MIME type not being returned by a web server.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin from the HTTP File Upload plugin archive page.

For other release announcements and news, follow us on Twitter.


by @wroot at November 17, 2019 17:58

November 15, 2019

ProcessOne

Real-time Radar #26

ProcessOne curates the Real-time Radar – a newsletter focusing on articles about technology and business aspects of real-time solutions. Here are the articles we found interesting in Issue #26. To receive this newsletter straight to your inbox on the day it is published, subscribe here.

Uniting global football fans with an XMPP geocluster

When you are running one of the top sport brands, launching a new innovative app always means it comes with great expectations from your fans. That’s why highly recognised brands turn to ProcessOne.

Get started with NB-IoT and Quectel modules

The year is 2029. Humans are populating the Moon, starting at Moon Base One. Two Moon Base Operators are about to commit a grave mistake in the crop garden of beautiful red tomatoes…

Blockchain and (I)IoT

Many people try to combine Blockchain and IoT in order to simplify communication between nodes in IoT solutions, increase communication security, and allow payments between nodes (e.g. a smart device can pay for some services when needed).

An era of IoT

Message Queuing Telemetry Transport (MQTT) is an M2M and IoT connectivity protocol. It is an open protocol originally specified by IBM and Eurotech, and more recently taken up by the Eclipse Foundation for M2M applications.

Crocodile solar pool sensor

This instructable shows how to build a rather special pool sensor measuring the pool temperature and transmitting it via WiFi to the Blynk app and to an MQTT broker. It uses the Arduino programming environment and an ESP8266 board (Wemos D1 mini pro).

Poets explore the language of push notifications

Has a poem ever made you cry? If you said yes, you’re just one of countless people who’ve been deeply moved by poetry. Now, has a push notification ever made you cry? Maybe not. In fact, we hope not.

New way to video conference

Remember when we had a video calling standard that worked with all mobile phones around the world, so you could just call someone up and see them in live video on the other end while talking to them? Me neither. That never happened.

by Marek Foss at November 15, 2019 11:32

November 14, 2019

ProcessOne

ejabberd 19.09.1

We are announcing a supplemental bugfix release of ejabberd version 19.09.1. The main focus has been to fix the issue with the webadmin returning 404 Not Found when the Host header doesn’t match anything in the configured hosts.

Bugfixes

Some people have reported still having issues when connecting to the web administration console. We have hopefully fixed that once and for all.

Technical changes

There is no change to perform on the database to move from ejabberd 19.09 to ejabberd 19.09.1. Still, as usual, please, make a backup before upgrading.

Download and install ejabberd 19.09.1

The source package and binary installers are available at ProcessOne. If you installed a previous version, there are no additional upgrade steps, but as a good practice, please back up your data.

As usual, the release is tagged in the Git source code repository on GitHub. If you suspect that you’ve found a bug, please search for or file a bug report in Issues.


Full changelog
===========

* Bugfixes
– Fix issue with webadmin returning 404 when ‘Host’ header doesn’t match anything in configured hosts
– Change url to guide in webadmin to working one

by Marek Foss at November 14, 2019 11:58

November 13, 2019

ProcessOne

Swift Server-Side Conference 2019 Highlights: Day 2

The second day of the Swift Server-Side conference was as packed with great talks as the first day. You can read my previous post on the workshop and day 1.

Building the next version of the Smoke Framework (Simon Pilkington)

Simon Pilkington introduced his rework of the Smoke framework, developed for the video ingestion platform of Amazon Prime Video.
This is a Swift framework used to accelerate the development of APIs in Swift. The framework starts with a Swagger API description and generates all the code needed to provide an API matching the specification.

Version 2 of the framework is on the way and focuses on improving the workflow, as performance in production is already great.

How we Vapor-ised our Mac app (Matias Piipari)

As expected, many developers in the Swift Server-Side community are coming from either iOS or Mac development. Swift server is for those developers a way to reuse both code and skills to produce server-enabled applications.

Matias’ talk told such a story: moving from a pure desktop Mac app for writing research papers to a collaboration tool, usable both on Mac and on the web.

The transition has been successful, but this is only a start, as several shortcuts have been taken to be able to release a version 1. For example, for now the server application only runs on Mac servers, as some pieces of code require UIKit to build. The next step is to make the code more modular and fully remove the UIKit dependencies from the server components, so they can run on Linux.

Supercharging your Web APIs with gRPC (Daniel Alm)

Daniel Alm presented his work on gRPC Swift. He worked with George Barnett to provide a great support library for building and consuming gRPC services.

gRPC is based on protobuf and allows describing APIs in protobuf, using protobuf as the format for parameters and responses. It is slowly becoming a de facto standard for APIs that are efficient to decode and more stable than JSON ones. You can, for example, rename a parameter in your code without breaking your clients.
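That stability comes from protobuf identifying fields by tag number rather than by name. A minimal sketch of a service description (the service and field names are hypothetical):

```protobuf
syntax = "proto3";

service Greeter {
  rpc Greet (GreetRequest) returns (GreetReply);
}

message GreetRequest {
  // Peers match fields on the tag number (1), not the name, so
  // renaming `user_name` later does not break deployed clients.
  string user_name = 1;
}

message GreetReply {
  string message = 1;
}
```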

As noted by Ian Partridge:

gRPC Swift is a lot further along than you might think. There is protoc support for generating Swift service stubs plus production ready client and server libraries, and it all runs on SwiftNIO’s implementation of HTTP/2!

And indeed, this is a big piece in the Swift server ecosystem.

If you want to try it, make sure to use the new nio branch, which is based on SwiftNIO.

Building high-tech robots with Swift – a story from the front lines (Gerwin de Haan & Mathieu Barnachon)

Another great talk describing a very practical and impressive use case for Swift in general and Swift server-side more specifically.

Styleshoots is building tools & robots to improve the workflow of studios shooting images and small videos for e-commerce sites.

They started with small robots to take pictures with a high-end DSLR camera, controlled by an iPad.

When they needed to ramp up to larger systems to be able to perform model shots and take small video for social networks as well, they needed server components (for example for remote maintenance and coordination / parameters exchange between multiple robots).

Moving to Swift server-side was a natural fit for them, and despite some attempts with other server-side tools like Node.js, they went back to Swift server-side. It is a better fit with their team, allowing them to reuse both code and skills.

You should check out their website, as the tools are impressive.

Testing SwiftNIO Systems (Johannes Weiss)

This SwiftNIO test talk was one of my favorites. I am a big fan of testing tools (like QuickCheck) and techniques. Johannes Weiss did a great job explaining how hard it is to test networking protocol stacks and how SwiftNIO’s modularity makes this task much easier. With a pipeline design, you can test one or two ChannelHandlers at a time.

The talk was packed with practical advice. For example, Johannes explained how to leverage SwiftNIO tools to help with testing, like EmbeddedChannel and NIOHTTP1TestServer.

I really recommend watching it, when it is released in video.

Maintaining a Library in a Swiftly Moving Ecosystem (Kaitlin Mahar)

Kaitlin Mahar put together a very engaging talk. She described the process of taking her Swift MongoDB library through the Swift Server Work Group incubation process. You can read her proposal here: Officially supported MongoDB Driver.

But she also did more and gave many pieces of advice to help library maintainers improve their version management and API evolution. For example, in no particular order:
– Use semantic versioning
– Use @available attribute to let your users know about API changes and deprecation
– Prepare release notes and explain the reasons behind your changes
– Prepare a code migration guide in case of big changes in the API. Don’t let your users figure it out by themselves.
– Set up your CI/CD to run tests on all the supported OS and Swift versions.
– …
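To illustrate the @available point, here is a minimal sketch (the Client type and its methods are hypothetical) of deprecating an old entry point while keeping it working:

```swift
struct Client {
    // Deprecated but still functional: callers get a compiler warning
    // pointing at the replacement instead of a broken build.
    @available(*, deprecated, renamed: "connect(host:port:)")
    func connect(to host: String) -> String {
        return connect(host: host, port: 23)
    }

    // The replacement API.
    func connect(host: String, port: Int) -> String {
        return "\(host):\(port)"
    }
}
```

Combined with semantic versioning, this lets users upgrade a minor version, read the warnings, and migrate at their own pace.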

Fluently NoSQL: Creating FluentDynamoDB (Joe Smith)

Joe is the maintainer of the Fluent-DynamoDB library and a contributor to AWS-SDK-Swift. He explained how he came to write those tools to help improve the alerting platform at Slack.

Full stack Swift development: how and why (Ivan Andriollo)

The final talk by Ivan Andriollo was both a great conclusion for the conference and a great talk in itself.

I must confess that I found this talk unexpectedly good. It was a talk by a consultant, sharing his view on agile programming and why it makes sense to kick-start your project by prototyping for production using Swift server-side and iOS clients. Such a talk could easily be boring or full of commonplaces.

But Ivan’s talk was clear and exciting, and presented the ideas in a very convincing way. This is an approach I agree with, as building both Swift servers and clients is something we have started to deploy at ProcessOne.

To summarize in a few words: if you want to be efficient and produce a prototype of a mobile service that can go to production, you can use a team of Swift iOS and Swift server developers to iterate fast toward the result. This is indeed the most efficient approach, limiting the coordination cost between separate teams. You can always add an Android client in a second stage, using the same API server (for example using gRPC).

This is a great conclusion for the conference, as this approach is the very essence and the most obvious raison d’être of the Swift Server-Side ecosystem.

Conclusion

Swift Server-Side conference 2019 has been my favorite conference this year. The talks were very deep and interesting, the venue was beautiful, and the food was great. But, most of all, the gathering of so many passionate people, working together to make Swift on the server a reality, has been an exhilarating experience. You all know that I am a creator who loves to build new stuff. After the Swift Server-Side conference, I went back home with new friends and the confidence that, led by such a group of people, Swift Server-Side is going to progress steadily in the coming months.

by Mickaël Rémond at November 13, 2019 20:44

SwiftNIO: Introduction to Channels, ChannelHandlers and Pipelines

Let’s keep on exploring the concepts behind SwiftNIO by playing with Channels, ChannelHandlers and ChannelPipelines.

This article was originally published in Mastering SwiftNIO, a new book exploring practical implementations of SwiftNIO. If you are new to SwiftNIO, you may want to first check out my previous article on SwiftNIO Futures and Promises.

What are SwiftNIO channels?

Channels are at the heart of SwiftNIO. They are responsible for many things in SwiftNIO:

  1. Thread-safety. A channel is associated with an EventLoop for its lifetime. All events processed for that channel are guaranteed to be triggered by the SwiftNIO framework in the same EventLoop. It means that the code you provide for a given channel is thread-safe (as long as you respect a few principles when adding your custom code). It also means that the ordering of the events happening on a given channel is guaranteed. SwiftNIO lets you focus on the business logic, handling the concurrency by design.
  2. Abstraction layer between application and transport. A channel keeps the link with the underlying transport. For example, a SocketChannel, used in TCP/IP clients or servers, keeps the link to its associated TCP/IP socket. This means that each new TCP/IP connection gets its own channel. In SwiftNIO, developers deal with channels, a high-level abstraction, not directly with sockets. The channel itself takes care of the interactions with the underlying socket.
  3. Applying the protocol workflow through dynamic pipelines. A channel coordinates its events and data flow through an associated ChannelPipeline, containing ChannelHandlers.

At this stage, the central role of channels may seem quite difficult to grasp, but you will get a more concrete view as we progress through our example.

Step 1: Bootstrapping your client or server with templates

Before we can play with channels, pipelines and handlers, we need to set up the structure of our networking application.

Thus, the first step when you need to build a client library or a server framework is to set up the “master” Channel and tie it to an EventLoopGroup.

That task can be tedious and error-prone; that’s why the SwiftNIO project provides Bootstrap helpers for common use cases. It offers, for example:

  • A ClientBootstrap to set up TCP/IP clients.
  • A ServerBootstrap to set up TCP/IP servers.
  • A DatagramBootstrap to set up UDP clients or servers.

Setting up the connection

Here is the minimal client setup:

// 1
// Creating a single-threaded EventLoop group is enough
// for a client.
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! evGroup.syncShutdownGracefully()
}

// 2
// The basic component to help you write a TCP client is ClientBootstrap. You
// also have a ServerBootstrap to set up a default TCP server for you.
let bootstrap = ClientBootstrap(group: evGroup)
    .channelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)

do {
    // 3
    // Connect to the server
    let channel = try bootstrap.connect(host: "towel.blinkenlights.nl", port: 23).wait()    
} catch let err {
    print(err)
}

As you can see, we set up the client in three major steps:
1. Create the EventLoopGroup.
2. Create the ClientBootstrap.
3. Connect to the server in a synchronous way, here to a remote server on port 23 (telnet).

In SwiftNIO, the ClientBootstrap connect(host:port:) method does more than just trigger a TCP/IP connection. It also “bootstraps” it by setting up the channel, the socket parameters, and the link to the channel’s event loop, and performs several other housekeeping operations.

Note on Threads & Blocking Operations:

In our example, the TCP/IP connection establishment is synchronous: we wait for the TCP/IP connection to be fully active.

In a real client, for example an iOS mobile client, we would just use the Channel future returned by the connect(host:port:) method, to avoid blocking the main UI thread.

Handling errors

The final part of the code is handling errors: as the connection can fail, we catch the possible errors to display them.

In our example, as we are connecting to a famous public “telnet” server (towel.blinkenlights.nl), the connection should work for you too, as long as the network is available.

If you are connecting to localhost instead, where you likely have no telnet server running (you should not), the connection will fail with the following error:

NIOConnectionError(host: "localhost", port: 23, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:23, error: connection reset (error set): Connection refused (errno: 61)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:23, error: connection reset (error set): Connection refused (errno: 61))])

As you can see, SwiftNIO errors are very precise. Here we clearly see that the connection was refused:

Connection refused (errno: 61)

But if the DNS resolution fails because the host does not exist (for example using localhost2), you would also get a different and relevant error:

NIOConnectionError(host: "localhost2", port: 23, dnsAError: Optional(NIO.SocketAddressError.unknown(host: "localhost2", port: 23)), dnsAAAAError: Optional(NIO.SocketAddressError.unknown(host: "localhost2", port: 23)), connectionErrors: [])

Step 2: Defining your first ChannelInboundHandler

In the current state, the code is of little help. It just opens a TCP/IP connection on the target server, but does not do anything more.

To be able to receive connection events and data, you need to associate ChannelHandlers with your Channels.

You have two types of ChannelHandler available, defined as protocols:

  • The ChannelInboundHandlers are used to process incoming events and data.
  • The ChannelOutboundHandlers are used to process outgoing events and data.

A channel handler can implement either inbound or outbound ChannelHandler protocol or both.

Methods in the protocols are optional, as SwiftNIO provides default implementations. However, you need to properly set up the required type aliases InboundIn and OutboundOut for your handler to work. Generally, you will use SwiftNIO’s ByteBuffer to convey the data at the lowest level. ByteBuffer is an efficient copy-on-write binary buffer. However, you can also write handlers intended to work at a higher level, transforming the data into more protocol-specific, ready-to-use data types. These types of handlers are called “codecs” and are responsible for data encoding/decoding.

For an inbound channel handler, you have a set of available methods you can implement to process events. Here are a few of them:

  • channelActive(context:): Called when the Channel has become active and is able to send and receive data. In our TCP/IP example, this method is called when the connection is established. You can use this method to perform post-connect operations, like sending the initial data required to open your session.
  • channelRead(context:data:): Called when some data has been read from the remote peer. This is called for each set of data received over the connection. Note that the data may be split across several calls to this method.
  • channelReadComplete(context:): Called when the Channel has completed its current read loop.
  • channelInactive(context:): Called when the Channel has become inactive and is no longer able to send and receive data. In our TCP/IP example, this method is triggered after the connection has been closed.
  • errorCaught(context:error:): Called when an error happens while receiving data or if an error was encountered in a previous inbound step. This can be called, for example, when the TCP/IP connection has been lost.

The context parameter receives a ChannelHandlerContext instance. It lets you access important properties, like the channel itself, so that you can, for example, write data back, going through the outbound sequence of handlers. It contains important helpers that you will need to write your networking code.
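As a sketch of that write-back path, here is a minimal inbound handler (the handler name is hypothetical) that uses its context to send the received bytes straight back out:

```swift
import NIO

// A hedged sketch: an inbound handler that echoes received bytes back
// through the outbound side of the pipeline, using its context.
private final class EchoBackHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Writing through the context sends the data through the
        // ChannelOutboundHandlers located earlier in the pipeline.
        context.writeAndFlush(data, promise: nil)
    }
}
```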

Let’s look at a simple ChannelInboundHandler implementing only a few methods. In the following code, the handler prints and logs some connection events as they happen (client is connected, client is disconnected, an error occurred):

private final class PrintHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    func channelActive(context: ChannelHandlerContext) {
        print("Client is connected to server")
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("Channel error: \(error)")
    }

    func channelInactive(context: ChannelHandlerContext) {
        print("Client is disconnected ")
    }
}

The channelActive and channelInactive methods are called when the connection has been established or closed. The errorCaught method will print any error that occurs during the session.

We will learn more about how the handlers are called in the next section, when talking about the channel’s pipeline.

Step 3: Setting up your channel pipeline

To be able to receive data from the server, you need to add at least one ChannelHandler to your ChannelPipeline. You do so by attaching a piece of code to run on each new channel: the channelInitializer. The initializer is called to set up every new channel. That’s typically where you define your ChannelPipeline.

What is the ChannelPipeline?

The pipeline organizes the sequence of inbound and outbound handlers as a chain:

In each handler, you can decide what to do with the data you received. You can buffer it, transform it, pass it further down the pipeline chain, etc. An inbound handler can even react directly to raw data and post some data back on the outbound side. As events are processed and refined while progressing through the pipeline, the pipeline and its ChannelHandlers are a good way to organise your application and clearly split the networking code from the business logic.

Even though the previous diagram shows, for clarity, the ChannelInboundHandler and ChannelOutboundHandler instances as separate chains, they are actually part of the same pipeline. They are represented as two separate paths because inbound handlers are only called for inbound events and outbound handlers are only triggered on outbound events. However, a channel has a single pipeline at any given time. The numbers in the diagram show each handler’s position in the pipeline list.

In other words, when an event is propagated, only the handlers that can handle it are triggered. The ChannelInboundHandlers are triggered in order when receiving data, for example, and the ChannelOutboundHandlers are triggered in reverse pipeline order when sending data, as shown in the first diagram.

This means that if a ChannelInboundHandler decides to write something back to the Channel using its context, the data will skip the rest of the ChannelInboundHandler chain and go directly through all ChannelOutboundHandler instances located earlier in the ChannelPipeline than the writing handler. The following diagram shows the data flow in that situation:

Pipeline setup

To set up your pipeline, you can use the addHandler(handler:name:position:) method on the channel pipeline object. The addHandler method can be called from anywhere, from any thread, so to enforce thread-safety it returns a future. To add several handlers in a row, you can chain the addHandler calls with the future’s flatMap() method (then() in older SwiftNIO versions), or use the addHandlers(handlers:position:) method.

As channel pipelines are dynamic, you can also remove handlers with the removeHandler(name:) method.
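For example, a sketch of chaining handler additions and removing one later (FrameDecoder and the handlers’ names are hypothetical placeholders):

```swift
import NIO

// Two trivial placeholder handlers, just so we have something to add.
// ChannelInboundHandler provides default implementations for all methods.
final class FrameDecoder: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
}
final class LogHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
}

func configurePipeline(_ channel: Channel) -> EventLoopFuture<Void> {
    // addHandler returns a future, so additions can be chained…
    return channel.pipeline.addHandler(FrameDecoder(), name: "decoder").flatMap {
        channel.pipeline.addHandler(LogHandler(), name: "logger")
    }
    // …or added in a single call:
    // return channel.pipeline.addHandlers([FrameDecoder(), LogHandler()])
}

// Later, because pipelines are dynamic, a handler can be removed by name:
// channel.pipeline.removeHandler(name: "decoder")
```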

For a server, most of the pipeline setup would be done on the child channels’ handlers, not on the main server channel. That way, the pipeline handlers are attached to each newly connected client channel.
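A sketch of what that looks like on the server side (the port and the trivial handler are hypothetical):

```swift
import NIO

let group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)
defer { try! group.syncShutdownGracefully() }

// A trivial placeholder handler for each accepted client connection.
final class ClientHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    func channelActive(context: ChannelHandlerContext) {
        print("New client connected")
    }
}

let bootstrap = ServerBootstrap(group: group)
    // Options for the listening (server) channel itself.
    .serverChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
    // The child initializer runs once per accepted connection, so each
    // client channel gets its own pipeline.
    .childChannelInitializer { channel in
        channel.pipeline.addHandler(ClientHandler())
    }

let serverChannel = try bootstrap.bind(host: "127.0.0.1", port: 9999).wait()
```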

Let’s see in step 4 how to process incoming data through a one-handler pipeline.

Step 4: Opening a data stream & processing it

Blinkenlights server

To demonstrate data reception, we will be using a famous public server, whose role is simply to send data over a simple TCP/IP connection.

The data is just “pages” of text, with ANSI terminal codes to reset the page display and print them “in place”. Using that trick, the server plays an ASCII art animated version of *Star Wars, Episode IV*, recreated with manual layout.

Even if you do not run the code from an ANSI-compliant terminal, you should be able to see all the pages printed at the bottom of your log and get a feel of the animation.

Updating our handler code

We are going to add two new methods in our handler:

  • channelRead(context:data:): As our “protocol” is very basic, simply sending frames to display on a regular basis, we can just accumulate the data in a ByteBuffer. Our handler will convert the incoming data to a buffer, using the self.unwrapInboundIn(data) method, and append it to a temporary buffer.
  • channelReadComplete(context:): In our example, as we are reading frames, we use this method to actually display the data we have previously buffered. We assume that when no more data is available to read, we have read a full frame. We then print the content of our temporary buffer to the terminal at once and empty the buffer.

We also modify the channelActive(context:) method to allocate and set up our temporary ByteBuffer. You can reuse the channel allocator from the context to allocate your buffer.

Here is the code of our PrintHandler:

private final class PrintHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    var buffer: ByteBuffer?

    func channelActive(context: ChannelHandlerContext) {
        buffer = context.channel.allocator.buffer(capacity: 2000)
        print("Client is connected to server")
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        var byteBuffer = self.unwrapInboundIn(data)
        buffer?.writeBuffer(&byteBuffer)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        if let length = buffer?.readableBytes {
            if let str = buffer?.readString(length: length) {
                print(str)
            }
        }
        buffer?.clear()
    }

    func errorCaught(context: ChannelHandlerContext, error: Error) {
        print("Channel error: \(error)")
    }

    func channelInactive(context: ChannelHandlerContext) {
        print("Client is disconnected ")
    }
}

Note that when reading the data, it is converted to our InboundIn typealias (in this case a ByteBuffer), using the unwrapInboundIn() method. There are several provided unwrappers (e.g. to ByteBuffer or FileRegion), but you can also create custom ones.

The overall SwiftNIO code setup is very simple:

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer {
    try! evGroup.syncShutdownGracefully()
}

let bootstrap = ClientBootstrap(group: evGroup)
    .channelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
    .channelInitializer { channel in
        channel.pipeline.addHandler(PrintHandler())
        }

// Once the Bootstrap client is setup, we can connect
do {
    _ = try bootstrap.connect(host: "towel.blinkenlights.nl", port: 23).wait()
} catch let err {
    print("Connection error: \(err)")
}

// Wait for return before quitting
_ = readLine()

The main change from the previous setup is that we have added a channelInitializer, in charge of setting up the channel pipeline with our PrintHandler.

Note: A pipeline is Dynamic

What you need to keep in mind about the channel pipeline is that it can change during the lifetime of the channel. The channelInitializer is called to set up an initial pipeline, but you can change it at any time during the life of the channel.

Many protocol implementations use this feature to model protocol state switching during the communication between client and server.

Step 5: Running the code

You can get the full example code from GitHub: Blinkenlights.

Build and run it with:

swift build
swift run

So, finally, when you run the SwiftNIO console application from your terminal, you should see the ASCII art *Star Wars, Episode IV* story:

Conclusion

This example is very simple but already gives you a glimpse of how you are going to organise a SwiftNIO client or server.

With your new knowledge of channel handlers and pipelines, you should be able to understand simple client/server examples, like the echoClient and echoServer examples in the SwiftNIO repository.

Channels, handlers and pipelines are really at the heart of the SwiftNIO architecture. There is a lot more to learn about handlers and pipelines, such as handlers implementing protocol coders/decoders (codecs). We will dig into more advanced topics in a future article and in my “Mastering SwiftNIO” book.

In a future post, we will show how to use multiple handlers in the pipeline to process raw data, pass it through codecs, and hand the resulting info to your higher-level application handlers.

In the meantime, you should already have a lot to play with.

Please do not hesitate to ask questions and to share this article if you liked it.

Photo by chuttersnap on Unsplash

by Mickaël Rémond at November 13, 2019 17:12

SwiftNIO: Understanding Futures and Promises

SwiftNIO is Apple’s non-blocking networking library. It can be used to write either client libraries or server frameworks, and works on macOS, iOS and Linux.

It is built by some of the Netty team members. It is a port of Netty, a high-performance networking framework written in Java, adapted to Swift. SwiftNIO thus reuses years of experience designing a proven framework.

If you want to understand in depth how SwiftNIO works, you first have to understand its underlying concepts. I will start in this article by explaining the concept of futures and promises. The ‘future’ concept is available in many languages, including JavaScript and C# (under the name async/await), and in Java and Scala (under the name ‘future’).

Futures and promises

Futures and promises are a set of programming abstractions for writing asynchronous code. The principle is quite simple: your asynchronous code returns a promise instead of the final result. The code calling your asynchronous function is not blocked and can do other operations before it finally decides to block and wait for the result, if and when it really needs to.

Even if the words ‘futures’ and ‘promises’ are often used interchangeably, there is a slight difference in meaning. They represent different points of view on the same value placeholder. As explained on the Wikipedia page:

A future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future.

In other words, the future is what the client code receives and can use as a handle to access a future value once it has been defined. The promise is the handle the asynchronous code keeps to write the value when it is ready, and thus fulfill the promise by producing the future value.

Let’s see in practice how futures and promises work.

SwiftNIO comes with a built-in futures and promises library. The code lies in EventLoopFuture. Don’t be fooled by the name: It is a full-featured ‘future’ library that you can use in your code to handle asynchronous operations.

Let’s see how you can use it to write asynchronous code, without specific reference to SwiftNIO-oriented networking operations.

Note: The examples in this blog post should work both on macOS and Linux.

Anatomy of SwiftNIO future / promise implementation

Step 1: Create an EventLoopGroup

The basic skeleton for our example is as follows:

import NIO

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

// Do things

try evGroup.syncShutdownGracefully()

We create an EventLoopGroup and shut it down gracefully at the end. A graceful shutdown means it will properly terminate the asynchronous jobs being executed.

An EventLoopGroup can be seen as a provider of an execution context for your asynchronous code. You can ask the EventLoopGroup for an execution context: an EventLoop. Basically, each execution context, each EventLoop, is a thread. EventLoops are used to provide an environment to run your concurrent code.

In the previous example, we create as many threads as we have cores on our computer (System.coreCount), but the number of threads could be as low as 1.

Step 2: Getting an EventLoop to execute your promise

In SwiftNIO, you cannot model concurrent execution without at least an event loop. For more info on what I mean by concurrency, you can watch Rob Pike’s excellent talk: Concurrency is not parallelism.

To execute your asynchronous code, you need to ask the EventLoopGroup for an EventLoop. You can use the method next() to get a new EventLoop, in a round-robin fashion.

The following code gets 10 event loops, using the next() method, and prints each event loop’s description.

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

for _ in 1...10 {
    let ev = evGroup.next()
    print(ev)
}

// Do things

try evGroup.syncShutdownGracefully()

On my system, with 8 cores, I get the following result:

System cores: 8

SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 5 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 6 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 7 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 8 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 9 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 10 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }

The description includes the id of the EventLoop. As you can see, you get 8 different loops before being assigned an existing EventLoop from the same group again. As expected, this matches our number of cores.

Note: Under the hood, most EventLoops are built on NIOThread, so that the implementation can be cross-platform: NIO threads are built on POSIX threads. However, some platform-specific loops, like NIO Transport Services, are free from multiplatform constraints and use Apple’s Dispatch library. This means that if you are targeting only macOS, you can use SwiftNIO futures and promises directly with the Dispatch library. As libdispatch now ships with Swift on Linux, it could also work there, but I have not tested it yet.

Step 3: Executing async code

If you just want to execute async code without waiting for a result, you can simply pass a closure to EventLoop.execute(_:):

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

ev.execute {
    print("Hello, ")
}
// sleep(1)
print("world!")

try evGroup.syncShutdownGracefully()

In the previous code, the order in which “Hello, ” and “world!” are displayed is undetermined.

Still, on my computer, it is clear that they are not executed in order. The print-out in the execute block is run asynchronously, after the execution of the print-out in the main thread:

System cores: 8

world!
Hello, 

You can uncomment the sleep(1) call to insert a one-second delay before the second print-out. It will “force” the ordering by delaying the main-thread print-out, so that “Hello, world!” is displayed in sequence.

Step 4: Waiting for async code execution

Adding timers to your code to order execution is a very bad practice. If you want to wait for the async code to finish, that’s where ‘futures’ and ‘promises’ come into play.

The following code submits async work to run on an EventLoop. The asyncPrint function waits for a given delay on the EventLoop and then prints the passed string.

When you call asyncPrint, you get a future in return. You can call the method wait() on that future to wait for the completion of the async code.

import NIO

// Async code
func asyncPrint(on ev: EventLoop, delayInSecond: UInt32, string: String) -> EventLoopFuture<Void> {
    // Do the async work, using the delay passed by the caller
    let future = ev.submit {
        sleepAndPrint(delayInSecond: delayInSecond, string: string)
    }

    // Return the future
    return future
}

func sleepAndPrint(delayInSecond: UInt32, string: String) {
    sleep(delayInSecond)
    print(string)
}

// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let future = asyncPrint(on: ev, delayInSecond: 1, string: "Hello, ")

print("Waiting...")
try future.wait()

print("world!")

try evGroup.syncShutdownGracefully()

The print-out will pause for one second on the “Waiting…” message and then display the “Hello, ” and “world!” messages in order.

Step 5: Promises and futures result

When you need a result, the promise must give you more than just a signal letting you know the processing is done. Thus, it will not be a promise of a Void result, but a promise of a more meaningful value.

First, let’s see a promise of a simple result that cannot fail. In your async code, you can return a promise that will deliver the result of a factorial calculation asynchronously. Your code will promise to return a Double and then submit the job to the EventLoop.

import NIO

// Async code
func asyncFactorial(on ev: EventLoop, n: Double) -> EventLoopFuture<Double> {
    // Do the async work
    let promise = ev.submit { () -> Double in
        return factorial(n: n)
    }

    // Return the promise
    return promise
}

// I would use a BigInt library to go beyond small-number factorials,
// but I do not want to introduce an external dependency.
func factorial(n: Double) -> Double {
    if n >= 0 {
        return n == 0 ? 1 : n * factorial(n: n - 1)
    } else {
        return Double.nan // factorial is undefined for negative numbers
    }
}

// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let n: Double = 10
let future = asyncFactorial(on: ev, n: n)

print("Waiting...")

let result = try future.wait()

print("fact(\(n)) = \(result)")

try evGroup.syncShutdownGracefully()

The code will be executed asynchronously and the wait() method will return the result:

System cores: 8

Waiting...
fact(10.0) = 3628800.0

Step 6: Success and error processing

If you are doing network operations, like downloading a web page, the operation can fail. You thus need to handle an outcome that can be either a success or an error. A SwiftNIO future completes with either a value or an Error, and you can register separate success and failure callbacks on it.

In the next example, an async function performs an asynchronous network operation using callbacks and returns a future result. The future will either deliver the content of the downloaded page to the success callback or an error to the failure callback.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Task done")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

print("Waiting...")

let future = asyncDownload(on: ev, urlString: "https://www.process-one.net/en/")
future.whenSuccess { page in
    print("Page received")
}
future.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

The previous code will either print “Page received” when the page is downloaded, or print the error. As your success handler receives the page content itself, you could do something with it (print it, analyse it, etc.).

Step 7: Combining async work results

Where promises really shine is when you want to chain several async calls that depend on each other. You can thus write code that reads as a logical sequence, but actually runs asynchronously.

In the following code, we reuse the previous async download function and process several pages by counting the number of div elements in all pages.

By wrapping this processing in a reduce function, we can download all the web pages in parallel. We receive each page’s data as it is downloaded and keep a running count of the div elements per page. Finally, we return the total as the future result.

This is a more involved example that should give you a better taste of what developing with futures and promises looks like.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Loading \(url)")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

var futures: [EventLoopFuture<String>] = []

for url in ["https://www.process-one.net/en/", "https://www.remond.im", "https://swift.org"] {
    let ev = evGroup.next()
    let future = asyncDownload(on: ev, urlString: url)
    futures.append(future)
}


let futureResult = EventLoopFuture.reduce(0, futures, on: evGroup.next()) { (count: Int, page: String) -> Int in
    // Count the number of "<div" occurrences in the page
    let tokens = page.components(separatedBy: "<div")
    return count + tokens.count - 1
}

futureResult.whenSuccess { count in
    print("Result = \(count)")
}
futureResult.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

This code effectively builds a pipeline: the three downloads run in parallel, and the reduce step accumulates the div counts as each future completes.

Conclusion

Futures and promises are at the heart of SwiftNIO design. To better understand SwiftNIO architecture, you need to understand the futures and promises mechanism.

However, there are more concepts you need to master to fully understand SwiftNIO. Most notably, inbound and outbound channel handlers let you structure your networking code into reusable components executed in a pipeline.

I will cover more SwiftNIO concepts in an upcoming blog post. In the meantime, please send us your feedback :)

by Mickaël Rémond at November 13, 2019 17:12

Swift Server-Side Conference 2019 Highlights: Workshop & Day 1

Swift is mostly known nowadays as the main programming language used to develop for Apple devices. However, Swift being open source, a small community of dedicated people has started building an ecosystem to make Swift development on the server-side a viable option.

Swift Server-Side is a fairly new conference dedicated to running Swift applications on the server. In practice, many people from the Swift server-side ecosystem attend this conference. I attended the second edition of this conference, held from Oct 30 to Nov 1, 2019 in Copenhagen.

Overall Impressions

I had missed the first edition last year in Berlin, but this second edition was a nice opportunity to meet with the community. Some people are still a bit reluctant to bet on Swift on Linux due to the fear of Apple being too prominent in the development of the server ecosystem.

However, judging by the mindset of the community, this is not a worry. The crowd, coming from various companies, was extremely involved in the Swift server-side ecosystem. People were not there to listen to Apple’s directions; they came because they are passionate about Swift and feel it is a very good fit on the server, a good middle ground between Rust and Go.

The developers there are well aware of the current weaknesses of Swift on the server, and are not waiting for a large company to solve them. The community created the Swift Server Work Group (SSWG), which works on sharing common pieces of code between frameworks to avoid redundant work. The SSWG has a plan, and the community is building the missing parts of the ecosystem piece by piece, from improvements to the open-source Foundation (the standard library) to various types of drivers and libraries.

What I have seen at work is a vibrant, highly knowledgeable community that has a plan to get to the point where Swift on the server is a solid choice for developers.

Workshops

Day one was dedicated to workshops. I attended two of them and enjoyed working directly with the main developers of each project.

Contributing to SwiftNIO and SSWG

I enjoyed being guided by Johannes Weiss & Cory Benfield on how to contribute to the Swift Server Work Group and to SwiftNIO. They were happy with the result of their workshop: 30 pull requests for SwiftNIO to process during the following days.

Build a cloud-native app with Kitura

This workshop was an introduction to Kitura, guided by its lead developer Ian Partridge. It was a nice way to get into Kitura and see the benefit of using Swift across three projects developed in parallel: a Swift service running on Linux, a macOS admin dashboard and an iOS client.

You can follow the material of the workshop on your own, using this repository: Kitura SOS Workshop

Conference Day 1

Swift Server Work Group Update (Logan Wright)

This was a nice summary of the progress of the Swift Server Work Group, showing the road ahead for framework and library developers. The bottom line is that the number of committers across Swift server and library projects is growing. By coordinating the effort and by sharing code, the community hopes to reach the point where developers have access to all the libraries they need to build their applications. Besides a common network framework (SwiftNIO), we now have Metrics and Logging initiatives, as well as several database drivers, as part of the SSWG effort.

My personal view is that the ecosystem development will accelerate. As Apple has now added Swift package support in Xcode, it is possible to write libraries that work on iOS, macOS and Linux alike. This is going to grow the package set even faster.

You can learn more about the progress here: SSWG Annual Update

Resilient Micro-Services with Vapor (Caleb Kleveter)

Caleb Kleveter did a good job explaining best practices for building micro-services in Swift, inspired notably by his work on SwiftCommerce.

Some of the advice applies to micro-services in general, not only Swift ones, especially regarding “external” resilience, while the general abstraction rules are more specific to Vapor.

Static site generation in Swift (John Sundell)

John Sundell introduced how he wrote tools to generate static HTML for his various websites (WWDC by Sundell and Swift by Sundell).

He announced that his static site generation tools — Ink, Plot and Publish — will be open sourced by the end of the year. You can follow him on Github to check when the code will be released: github.com/JohnSundell 

API Performance: A macro talk about micro decisions (Joannis Orlandos)

Joannis Orlandos gave us some food for thought about API performance, seen from different points of view:
– For users: performance is often response time
– For developers: it is more often requests per second
– For sysadmins: it is CPU and memory footprint
– For management: it is development time and hosting cost

SwiftNIO solves most aspects of server-side performance; development time is often addressed by the frameworks built on top of SwiftNIO.

He also shared practical tips, for example avoiding auto-incremented IDs in the database in favor of UUIDs, to limit locking on the increment.

Cloud-native Swift micro-services (Ian Partridge)

Ian Partridge did a very good job presenting the progress of Kitura itself and the set of tools around the Kitura framework. Most notably he mentioned:
– The release of Kitura 2.9
– The SwiftKafka library (a wrapper around …)
– The Hystrix library (not a Swift project directly, but useful in a micro-service architecture)
– Circuit breaker support

Finally, he introduced Appsody, a tool to quickly bootstrap Docker/Kubernetes-based microservices. The tool is not exclusive to Swift: it can also help bootstrap Java, JavaScript, Rust or Python web services.

Breaking into tech (Heidi Hermann)

Heidi shared her experience and her view of the tech community, and gave advice on how we can make it more welcoming. It was a really great talk, and she convincingly demonstrated that, as a whole, we are failing at training people and helping them progress, and at bringing other views, opinions and backgrounds into our companies.

As she said, most tech companies these days are only hiring senior developers; they consider the pressure to deliver higher than the need to prepare for the future. Stated this way, it is clearly not sustainable, especially as there is a shortage of experienced developers. We all need to work to build the tech community of tomorrow.

Building State Machines in Swift (Cory Benfield)

This was another fantastic talk. Coming from an Erlang background, where state machines are a first-class process type (see gen_fsm and now gen_statem), I really enjoyed Cory’s take on the Swift-friendly approach. Thanks to enums and type checking, Swift helps you write robust and safe state machines.

He also shared a lot of nice design tips, showing how to properly encapsulate states as enums to prevent users of your state machine from messing with it.

Swift Development on Linux (Jonas Schwartz)

Finally, Jonas Schwartz shared his setup and tips for developing in Swift on Linux. While it is clearly a bit rougher than using a Mac, he showed that it is definitely possible. Once you are set up, it can even be an enjoyable experience.

Server-side Swift Panel

The panel concluded the day with a lucid overview of what currently works well with server-side Swift (it is already production-ready, and the community coordination effort is very good), but also covered the missing pieces, such as the current lack of async/await in the Swift language and the improvements still needed in Foundation on Linux.

and Day 2 …

In an upcoming blog post, I will cover day 2 of the Server-Side Swift conference and share my conclusions.

In the meantime, do not hesitate to share your questions, concerns or feedback about Server-Side Swift.

by Mickaël Rémond at November 13, 2019 17:11

Understanding ejabberd OAuth Support & Roadmap

Login and password authentication is still the most commonly used auth mechanism on XMPP services. However, it raises security concerns, because it requires storing the credentials in the client app in order to log in again without asking for the password.

Mobile APIs on iOS and Android let you encrypt data at rest, but still, it is best not to store any password at all.

Fortunately, several solutions exist, all supported by ejabberd. You can use either OAuth or certificate-based authentication. As client certificate management is still quite tricky, this post focuses on explaining how to set up and use ejabberd’s OAuth support.

Understanding ejabberd OAuth Support

The principle of OAuth is simple: OAuth offers a mechanism to let your users generate a token to connect to your service. The client can just keep that token to authenticate and is not required to store the password for subsequent authentications.

Implicit grant

As of ejabberd 19.09, ejabberd supports only the OAuth implicit grant. Implicit grant is often used to let third-party clients — clients you do not control — connect to your server.

The implicit grant requires redirecting the client to a web page, so the client never sees the user’s login and password. Indeed, as you cannot trust third-party clients, this is the sane way to keep your users’ passwords from being typed directly into any third-party client. You can never be sure that the client will not store them (locally, or worse, in the cloud).

With the implicit grant, the client app directs the user to the sign-in page on your server to authenticate and get the token, often with login and password (but the mechanism can be different and could involve 2FA, for example). Your website then uses a redirect URL, passed back to the client, containing the token to use for logging in. The redirect usually happens via a client-registered domain or a custom URL scheme.
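To make the last step concrete, here is a minimal Go sketch of what a client does with the redirect it receives. Per the OAuth2 implicit grant, the token travels in the URL fragment; the custom scheme, helper name and parameter values below are illustrative assumptions, not ejabberd specifics:

```go
package main

import (
	"fmt"
	"net/url"
)

// tokenFromRedirect extracts the access token from an implicit-grant
// redirect URL. In the implicit grant, the token is carried in the
// URL fragment rather than in query parameters.
func tokenFromRedirect(redirect string) (string, error) {
	u, err := url.Parse(redirect)
	if err != nil {
		return "", err
	}
	frag, err := url.ParseQuery(u.Fragment)
	if err != nil {
		return "", err
	}
	return frag.Get("access_token"), nil
}

func main() {
	// Hypothetical custom-scheme redirect produced by the server.
	token, err := tokenFromRedirect("myapp://auth#access_token=abc123&token_type=bearer&expires_in=2592000")
	if err != nil {
		panic(err)
	}
	fmt.Println(token) // prints abc123
}
```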

… and password grant

The implicit grant workflow is not ideal if your ejabberd service is only usable with your own client. Using web view redirects can feel cumbersome in your onboarding workflow. As you trust the client, you would probably like to call an API directly with the login and password, get the OAuth token back, and forget about the password. The user experience will be more pleasant and feel more native.

This flow is known in OAuth as the OAuth password grant.

In the upcoming ejabberd version, you will be able to use the OAuth password grant in addition to the implicit grant. The beta feature is already in the ejabberd master branch, so this is a good opportunity to try it and share your feedback.

Let’s use ejabberd OAuth Password grant in practice

Step 1: ejabberd configuration

To support OAuth2 in ejabberd, add the following directives to the ejabberd config file:

# Default duration for generated tokens (in seconds)
# Here the default value is 30 days
oauth_expire: 2592000
# OAuth token generation is enabled for all server users
oauth_access: all
# Check that the client ID is registered
oauth_client_id_check: db

In your ejabberd HTTPS listener, you also need to add the OAuth request handler:

listen:
  # ...
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      # ...
      "/oauth": ejabberd_oauth

Note: I am using HTTPS even for a demo, as it is mandatory on iOS. During the development phase, you should create your own CA and add a trusted development certificate to ejabberd. Read the following blog post if you need guidance: Using a local development trusted CA on MacOS

You can download my full test config file here: ejabberd.yml

Step 2: Registering an OAuth client

If you produce a first-party client, you can bypass the need for OAuth to redirect to your browser to get the token.

As you trust the application you are developing, you can let the user of your app enter the login and password directly in your client. However, you should never store the password itself, only the OAuth tokens.

In ejabberd, I recommend first configuring an OAuth client, so that the server can check that the client id is registered.

You can use the ejabberdctl command oauth_add_client_password, or use the Erlang command line.

Here is how to use ejabberdctl to register a first-party client:

ejabberdctl oauth_add_client_password <client_id> <client_name> <secret>

As the feature is still in development, you may find it easier to register your client directly from the Erlang command line. The parameters are client_id, client_name and a secret:

1> ejabberd_oauth:oauth_add_client_password(<<"client-id-Iegh7ooK">>, <<"Demo client">>, <<"3dc8b0885b3043c0e38aa2e1dc64">>).
{ok,[]}

Once you have registered a client, you can start generating OAuth tokens for your users from your client, using an HTTPS API.

Step 3: Generating a password grant token

You can use the standard OAuth2 password grant query to get a bearer token for a given user. You will need to pass the user JID and the password. You also need to request the OAuth scope sasl_auth, so that the token can be used for authentication directly in the XMPP flow.

Note: As you are passing the client secret as a parameter, you must use HTTPS in production for those queries.

Here is an example query to get a token using the password grant flow:

curl -i -X POST 'https://localhost:5443/oauth/token' -d grant_type=password -d username=test@localhost -d password=test -d client_id=client-id-Iegh7ooK -d client_secret=3dc8b0885b3043c0e38aa2e1dc64 -d scope=sasl_auth

HTTP/1.1 200 OK
Content-Length: 114
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{"access_token":"DGV4JFzW15iZFmsnvzT7IymupTAYvo6U","token_type":"bearer","scope":"sasl_auth","expires_in":2592000}

As you can see, the response is a JSON document. You can easily extract the access_token from it. That’s the value you will use to authenticate on XMPP.
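For instance, here is a minimal Go sketch that decodes the response shown above and pulls out the access_token. The struct and helper names are my own, not part of any library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tokenResponse mirrors the JSON document returned by /oauth/token.
type tokenResponse struct {
	AccessToken string `json:"access_token"`
	TokenType   string `json:"token_type"`
	Scope       string `json:"scope"`
	ExpiresIn   int    `json:"expires_in"`
}

// parseToken decodes the response body into a tokenResponse.
func parseToken(body []byte) (tokenResponse, error) {
	var tok tokenResponse
	err := json.Unmarshal(body, &tok)
	return tok, err
}

func main() {
	body := []byte(`{"access_token":"DGV4JFzW15iZFmsnvzT7IymupTAYvo6U","token_type":"bearer","scope":"sasl_auth","expires_in":2592000}`)
	tok, err := parseToken(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(tok.AccessToken) // the value to use for XMPP authentication
}
```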

Step 4: Connecting on XMPP using an OAuth token

To authenticate over XMPP, you need to use the X-OAUTH2 mechanism. X-OAUTH2 was defined by Google for Google Talk and reused later by Facebook chat. You can find Google description here: XMPP OAuth 2.0 Authorization.

Basically, it encodes the JID and token as in SASL PLAIN authentication, but instead of passing the PLAIN keyword as the mechanism, it uses X-OAUTH2. ejabberd will thus know that it has to check the secret against the token table in the database, instead of checking the credentials against the password table.
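Following the PLAIN-style layout described in Google's X-OAUTH2 documentation (NUL-separated fields, base64-encoded), the initial response can be sketched in Go as follows. The helper name is my own, and you would normally let your XMPP library build this for you:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// xoauth2Payload builds the SASL payload for the X-OAUTH2 mechanism:
// as with SASL PLAIN, it is base64("\x00" + user + "\x00" + token).
func xoauth2Payload(user, token string) string {
	raw := "\x00" + user + "\x00" + token
	return base64.StdEncoding.EncodeToString([]byte(raw))
}

func main() {
	fmt.Println(xoauth2Payload("test@localhost", "DGV4JFzW15iZFmsnvzT7IymupTAYvo6U"))
}
```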

Quick demo

Next, let’s demonstrate the connection using the Fluux Go XMPP library, which is the only library I know of that supports OAuth tokens today.

Here is an example client login on XMPP with an OAuth2 token:

package main

import (
    "fmt"
    "log"
    "os"

    "gosrc.io/xmpp"
    "gosrc.io/xmpp/stanza"
)

func main() {
    config := xmpp.Config{
        Address:      "localhost:5222",
        Jid:          "test@localhost",
        Credential:   xmpp.OAuthToken("DGV4JFzW15iZFmsnvzT7IymupTAYvo6U"),
        StreamLogger: os.Stdout,
    }

    router := xmpp.NewRouter()
    router.HandleFunc("message", handleMessage)

    client, err := xmpp.NewClient(config, router)
    if err != nil {
        log.Fatalf("%+v", err)
    }

    // If you pass the client to a connection manager, it will handle the reconnect policy
    // for you automatically.
    cm := xmpp.NewStreamManager(client, nil)
    log.Fatal(cm.Run())
}

func handleMessage(s xmpp.Sender, p stanza.Packet) {
    msg, ok := p.(stanza.Message)
    if !ok {
        _, _ = fmt.Fprintf(os.Stdout, "Ignoring packet: %T\n", p)
        return
    }

    _, _ = fmt.Fprintf(os.Stdout, "Body = %s - from = %s\n", msg.Body, msg.From)
}

The important part for OAuth is that you are telling the library to use an OAuth2 token with the following value in the xmpp.Config struct:

xmpp.Config{
    // ...
    Credential: xmpp.OAuthToken("DGV4JFzW15iZFmsnvzT7IymupTAYvo6U"),
}

You can check the example in Fluux XMPP example directory: xmpp_oauth2.go

There is more

As I said, ejabberd’s OAuth support is not limited to the password grant. Since ejabberd 15.09, we have supported implicit grant generation, and it is still available. You can find more information in the ejabberd documentation: OAuth

Moreover, there is more than XMPP authentication with OAuth 2. In the current development version, you can authenticate your devices on the ejabberd MQTT service using MQTT 5.0 Enhanced Authentication. The authentication method is the same as for XMPP: we reuse the X-OAUTH2 method name. When you try to use this method, the server confirms that you are allowed to use it, and you pass your token in return.

Please, note that you will need to use an MQTT 5.0 client library to use OAuth2 authentication with MQTT.

Conclusion

ejabberd’s OAuth XMPP and MQTT authentication uses the informal auth mechanism introduced by Google Talk and reused by Facebook. It does the job and fills an important security need.

That said, I would love to see more standard support from the XMPP Standards Foundation regarding OAuth authentication. For example, a specification translating OAuth authentication to the XMPP flow would be of great help.

Still, in the meantime, I hope more libraries will support that informal OAuth specification, so that client developers have a good alternative to local password storage for subsequent authentications.

Please, give it a try from master and send us feedback if you want to help us shape the evolution of OAuth support in ejabberd.

… And let’s end password-oriented client authentication :)

by Mickaël Rémond at November 13, 2019 17:11

Writing a Custom Scroll View with SwiftUI in a chat application

When you are writing a chat application, you need to be able to have some control on the chat view. The chat view typically starts aligned at the end of the conversation, which is the bottom of the screen. When you have received more messages and they cannot fit on one screen anymore, you can scroll back to display them.

However, building such a conversation view with only the standard SwiftUI ScrollView is not possible in the first release of SwiftUI (as of Xcode 11), as no API is provided to set the content offset and start with the content at the bottom. That means you would be stuck displaying your chat window at the top and scrolling down to see the newest messages, which is not acceptable.

In this article, I will show you how to write a custom scroll view to get the intended behaviour. It will not yet be a fully-featured scroll view, with all the bells and whistles you expect (like, for example, a scroll bar), but it will be a good example showing what is required to build SwiftUI custom views. You can then build on that example to add the features you need.

Note: The code was tested on Xcode 11.0.

What is a scroll view?

A scroll view is a view that lets you see more content than can fit on the screen, by dragging the content to display more.

From a technical point of view, a scroll view contains another view that is larger than the screen. It will then handle the “drag” events to synchronize the displayed part of the content view.

Custom SwiftUI scroll view principles

To create a custom SwiftUI view, you generally need to master two SwiftUI concepts:

  • GeometryReader: GeometryReader is a view wrapper that lets child views access sizing information from their parent view.
  • Preferences: Preferences are used for the reverse operation. They propagate information from the child views to the parent. They are usually attached to the parent by creating a view modifier.

Creating an example project

We will be creating an example project, with an example conversation file in JSON format to illustrate the view rendering.

Create a new project for iOS, and select the Single View App template:

Choose a name for the new project (e.g. SwiftUI-ScrollView-Demo) and make sure you select SwiftUI for the User Interface:

You are ready to start your example project.

Creating a basic view with the conversation loaded

Create a Models group in the SwiftUI-ScrollView-Demo group and create a Swift file named Conversation.swift in that group.

It will contain a minimal model to allow rendering a conversation and populate a demo conversation with test messages, to test our ability to put those messages in a scroll view.

//
//  Conversation.swift
//  SwiftUI-ScrollView-Demo
//
struct Conversation: Hashable, Codable {
    var messages: [Message] = []
}

struct Message: Hashable, Codable, Identifiable {
    public var id: Int
    let body: String
    // TODO: add more fields (from, to, timestamp, read indicators, etc).
}

// Create demo conversation to test our custom scroll view.
let demoConversation: Conversation = {
    var conversation = Conversation()
    for index in 0..<40 {
        let message = Message(id: index, body: "message \(index)")
        conversation.messages.append(message)
    }
    return conversation
}()

Preparing the BubbleView

In this article, the message BubbleView will not look like a chat bubble. It will just be a raw cell with a gray background.

Create a new SwiftUI file named BubbleView.swift in the SwiftUI-ScrollView-Demo group.

The content of the file is as follows:

//
//  BubbleView.swift
//  SwiftUI-ScrollView-Demo
//
import SwiftUI

struct BubbleView: View {
    var message: String

    var body: some View {
        HStack {
            Spacer()
            Text(message)
        }
        .padding(10)
        .background(Color.gray)
    }
}

struct BubbleView_Previews: PreviewProvider {
    static var previews: some View {
        BubbleView(message: "Hello")
            .previewLayout(.sizeThatFits)
    }
}

It renders a right-aligned text message, with padding and gray background.

With the custom preview layout, the canvas preview will only show you the content of that view, with the preview message “Hello”:

Working on the main conversation view

You can now edit the ContentView.swift file to render your custom scroll view.

First rename your ContentView to ConversationView using the new Xcode refactoring.

Then, you can prepare your list of messages in the conversation and render them in a VStack. We put that VStack in a standard scroll view, to be able to see all the messages by scrolling the VStack inside the scroll view.

//
//  ConversationView.swift
//  SwiftUI-ScrollView-Demo
//
import SwiftUI

struct ConversationView: View {
    var conversation: Conversation

    var body: some View {
        NavigationView {
            ScrollView {
                VStack(spacing: 8) {
                    ForEach(self.conversation.messages) { message in
                        return BubbleView(message: message.body)
                    }
                }
            }
            .navigationBarTitle(Text("Conversation"))
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ConversationView(conversation: demoConversation)
    }
}

The preview is getting our demoConversation to render our example conversation.

Note that you also need to edit your SceneDelegate to pass the demoConversation as a parameter when setting up your ConversationView:

//  SceneDelegate.swift
// ...
        // Create the SwiftUI view that provides the window contents.
        let contentView = ConversationView(conversation: demoConversation)
// ...

We now render all the messages in our demo conversation, but we can see that the conversation is top-aligned, and there is currently no API to control the content offset so that the display starts from the bottom of the scroll view on init.

We will fix that in a moment by writing a custom scroll view.

Bootstrapping our custom scroll view

Add a new SwiftUI file called ReverseScrollView.swift in the project.

You can start by creating a ReverseScrollView that adapts the VStack to the parent view geometry, thanks to GeometryReader. By wrapping the content of the ReverseScrollView inside a GeometryReader, you can access information about the "outer" geometry (like its height).

Here is an initial version of the ReverseScrollView:

//
//  ReverseScrollView.swift
//  SwiftUI-ScrollView-Demo
//
//  Created by Mickaël Rémond on 24/09/2019.
//  Copyright © 2019 ProcessOne. All rights reserved.
//
import SwiftUI

struct ReverseScrollView<Content>: View where Content: View {
    var content: () -> Content

    var body: some View {
        GeometryReader { outerGeometry in
            // Render the content
            //  ... and set its sizing inside the parent
            self.content()
            .frame(height: outerGeometry.size.height)
            .clipped()
        }
    }
}

struct ReverseScrollView_Previews: PreviewProvider {
    static var previews: some View {
        ReverseScrollView {
            BubbleView(message: "Hello")
        }
        .previewLayout(.sizeThatFits)
    }
}

You can also replace the ScrollView in ConversationView to use our ReverseScrollView:

//...
        NavigationView {
            ReverseScrollView {
                VStack {
//...

In the Canvas, you can see that the view does not scroll yet, nor is the last message displayed at the bottom, but the content now fits properly inside its parent view.

Aligning our view content to the bottom of the ReverseScrollView

The next step is to use preferences to pass the size of the content view to our ReverseScrollView. This will allow us to align the content of the view to the bottom of our custom ScrollView.

To do that we will leverage a SwiftUI feature called preferences. The preferences will be used to track the content size in the ReverseScrollView to properly set the content offset so that it is bottom aligned.

To track the content view height, we need to define a PreferenceKey that will keep track of the total height of the view. It will sum up the value of the height of all subviews in its reduce static function. To do so, add the following code to your ReverseScrollView file:

struct ViewHeightKey: PreferenceKey {
    static var defaultValue: CGFloat { 0 }
    static func reduce(value: inout Value, nextValue: () -> Value) {
        value = value + nextValue()
    }
}

You then need to make that ViewHeightKey a view modifier that uses a few tricks to read the content size and propagate the value:

  • The view modifier embeds a GeometryReader in the content's background to read its geometry. This works because the background is sized exactly like the content itself.
  • The view modifier then sets the preference for that key on a Color.clear view, propagating the value to the parent, which listens for it with the onPreferenceChange event. We use Color.clear because we need to produce a view here but want that background to stay invisible. This trick makes it possible to read and propagate the preference, using a "dummy" background view.

Here is the view modifier extension for our ViewHeightKey:

extension ViewHeightKey: ViewModifier {
    func body(content: Content) -> some View {
        return content.background(GeometryReader { proxy in
            Color.clear.preference(key: Self.self, value: proxy.size.height)
        })
    }
}

Finally, we need to keep track of that content view height in a ReverseScrollView state. To do so:

  • We add a contentHeight state to our ReverseScrollView.
  • We apply our ViewHeightKey view modifier to the content view.
  • We set our contentHeight state in the onPreferenceChange event for the ViewHeightKey values.
  • We apply the resulting offset on the y axis of the content. To calculate it, we use the following offset function, which uses the scroll view height and the content height so that the content is bottom-aligned (see below).
    // Calculate content offset
    func offset(outerheight: CGFloat, innerheight: CGFloat) -> CGFloat {
        print("outerheight: \(outerheight) innerheight: \(innerheight)")
        // Shift the content down so that it is bottom-aligned in the outer view
        return -(innerheight/2 - outerheight/2)
    }
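Putting those steps together, the body of ReverseScrollView now looks roughly like this. This is a sketch based on the steps above; the exact modifier order may differ slightly from the final project, and the offset(outerheight:innerheight:) function shown earlier lives in the same struct:

```swift
struct ReverseScrollView<Content>: View where Content: View {
    @State var contentHeight: CGFloat = CGFloat.zero
    var content: () -> Content

    var body: some View {
        GeometryReader { outerGeometry in
            self.content()
                // Measure the content height and publish it as a preference
                .modifier(ViewHeightKey())
                .onPreferenceChange(ViewHeightKey.self) { height in
                    self.contentHeight = height
                }
                // Size the content inside the parent, then shift it on the
                // y axis so that it is bottom-aligned
                .frame(height: outerGeometry.size.height)
                .offset(y: self.offset(outerheight: outerGeometry.size.height,
                                       innerheight: self.contentHeight))
                .clipped()
        }
    }
}
```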

The content view is now bottom-aligned and the last message (Message 39) is properly displayed at the bottom of our custom scroll view.

You can check the final code in ReverseScrollView.swift.

Making our scroll view scrollable

The final step is to make our custom scroll view able to scroll, synchronized with vertical drag events.

First, we need to add two new states to keep track of the scroll position:

  • The current offset sets the content offset after a drag event has ended.
  • The scroll offset synchronizes the content offset while the user is still dragging the view.

We will update those two states in the drag gesture handlers onChanged and onEnded.
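As a sketch, those two states are plain CGFloat values on the ReverseScrollView (the names are chosen to match the offset and drag functions used in this article):

```swift
// Inside ReverseScrollView:
@State var currentOffset: CGFloat = 0 // offset committed when a drag ends
@State var scrollOffset: CGFloat = 0  // transient offset while a drag is in progress
```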

Here is the handler for an ongoing drag event:

    func onDragChanged(_ value: DragGesture.Value) {
        // Update rendered offset
        print("Start: \(value.startLocation.y)")
        print("Location: \(value.location.y)")
        self.scrollOffset = (value.location.y - value.startLocation.y)
        print("Scrolloffset: \(self.scrollOffset)")
    }

and when the drag ends, we store the current position, enforcing the top and bottom limits:

    func onDragEnded(_ value: DragGesture.Value, outerHeight: CGFloat) {
        // Update view to target position based on drag position
        let scrollOffset = value.location.y - value.startLocation.y
        print("Ended currentOffset=\(self.currentOffset) scrollOffset=\(scrollOffset)")

        let topLimit = self.contentHeight - outerHeight
        print("toplimit: \(topLimit)")

        // Negative topLimit => Content is smaller than screen size. We reset the scroll position on drag end:
        if topLimit < 0 {
             self.currentOffset = 0
        } else {
            // We cannot pass bottom limit (negative scroll)
            if self.currentOffset + scrollOffset < 0 {
                self.currentOffset = 0
            } else if self.currentOffset + scrollOffset > topLimit {
                self.currentOffset = topLimit
            } else {
                self.currentOffset += scrollOffset
            }
        }
        print("new currentOffset=\(self.currentOffset)")
        self.scrollOffset = 0
    }

We also need to update the offset calculation to take into account the drag states:

    // Calculate content offset
    func offset(outerheight: CGFloat, innerheight: CGFloat) -> CGFloat {
        print("outerheight: \(outerheight) innerheight: \(innerheight)")

        let totalOffset = currentOffset + scrollOffset
        return -((innerheight/2 - outerheight/2) - totalOffset)
    }

Finally, you need to track gestures on the content view and link them to our drag functions:

            self.content()
            // ...
            .animation(.easeInOut)
            .gesture(
                 DragGesture()
                    .onChanged({ self.onDragChanged($0) })
                    .onEnded({ self.onDragEnded($0, outerHeight: outerGeometry.size.height)}))

We took the opportunity to also apply an animation, to smooth the scroll position correction when hitting a limit at the end of a drag. We now have a custom scroll view that starts at the bottom and can be scrolled with proper top and bottom limits.

Final project

You can download the final project example from Github: SwiftUI-ScrollView-Demo.

What’s next?

Let us know in the comments if you are interested in follow-up blog posts. Here are possible additional features that could make sense to illustrate in detail:

  • Handle device rotation
  • Show how to add messages to the conversation
  • Kinetic scroll with deceleration
  • Better bounce when hitting limits
  • Add a scroll bar
  • Kinetic animation for chat bubbles when scrolling (bit of springy behaviour)

Photo by Alvaro Reyes, Unsplash

by Mickaël Rémond at November 13, 2019 16:41

November 12, 2019

Ignite Realtime Blog

Openfire 4.4.4 Release

@akrherz wrote:

The Ignite Realtime Community is pleased to announce the release of version 4.4.4 of Openfire. This release addresses two regressions found with the 4.4.3 release. These regressions included the admin console security audit page not working and a problem with nested groups.

You can find downloads available with the following sha1sum values for the release artifacts:

7614de25698d4d65d2d2de2d97194b55c8c16e2f  openfire-4.4.4-1.i686.rpm
ea96c5d040909644015e463faa93624be6f08812  openfire-4.4.4-1.noarch.rpm
260dd74e174f087c244c19f167c70a5d3e9c2fff  openfire-4.4.4-1.x86_64.rpm
e792781936add645490eefd10f442a65a3ec980e  openfire_4.4.4_all.deb
a80ddaa11b4f64f6a0e73de2dea470c91d125452  openfire_4_4_4_bundledJRE.exe
c6dc907f7518af844fb9ed52cf7002688badb840  openfire_4_4_4_bundledJRE_x64.exe
87f30a8a0b375d45760435d81172a7794133f12f  openfire_4_4_4.dmg
fa17b1366e2b7b8afb8e7176b2f74a9345c3484e  openfire_4_4_4.exe
9b861abd07d91d72b0d3b487d7f48552a56b8177  openfire_4_4_4.tar.gz
9aad7853f8152a4e92ea23c02dd5981c0f2894f4  openfire_4_4_4_x64.exe
2ac84043b13574755bc7171129b704763c16602a  openfire_4_4_4.zip
0d87444af0e995496997673bb34ace290b1a351a  openfire_src_4_4_4.tar.gz
257eb6c36b2f7695bb7c7f898e3a03dbe865b2aa  openfire_src_4_4_4.zip

Please consider dropping by our web groupchat if you are interested in helping out with the development, documentation, and/or testing of Openfire. Please report any issues you find with Openfire in our Discourse forums. Thanks for using Openfire!

Posts: 1

Participants: 1

Read full topic

by @akrherz daryl herzmann at November 12, 2019 20:31

HTTP File Upload plugin 1.1.2 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.1.2 of the HTTP File Upload plugin for Openfire!

This plugin enables users to share files in one-on-one and group chats by uploading a file to a server and providing a link.

This update fixes a major bug in the previous update and provides an option to configure the context root.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin from the HTTP File Upload plugin archive page.

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at November 12, 2019 20:16

November 09, 2019

Monal IM

iOS 13 and pushes

I can’t figure out what’s going on with iOS 13 and pushes. It’s highly inconsistent and doesn’t match the documentation. There is a new beta of 4.1 available that uses a new push server. The pushes are generic “new message” indicators, but they are 100% reliable. I will improve this.

by Anu at November 09, 2019 13:08

November 08, 2019

The XMPP Standards Foundation

XMPP Newsletter, 08 Nov 2019, Sprints, IoT, and early Twitter

Welcome to the XMPP newsletter covering the month of October 2019.

This is a community effort, and the process is fully documented: please help us spread the word, share this newsletter.

Articles

QuickBlox has written a blog post about the use of XMPP in 2019.

Quickblox examples

Martin aka debacle wrote a short piece about Dino and other updated software in Debian.

Ben Kwiecien rediscovered XMPP after years away and compared ejabberd and Prosody while setting up a mobile-friendly server.

Neetesh Mehrotra wrote about XMPP as a Communication Protocol for the IoT, comparing it to MQTT and HTTP.

Jon Erlichman published a photo of an early sketch of Twitter in 2005 by Jack Dorsey... which features Jabber at the core.

Jack Dorsey's 2005 Twitter

And last but not least, there are translations everywhere! Last month's newsletter has been translated into German and Spanish!

Tutorials

Erlang Solutions republished their tutorial on how to build a complete iOS messaging app using XMPPframework, with its part 2.

Events

The pace of local meetings and sprints is still high within the XMPP community.

pep tells us the story of the "Sprint in the cold north", with sauna and crêpes, working on the new groupchat bookmarks specification, file transfer interoperability issues, and a future landing page for new XMPP users.

Stockholm

Software releases

Servers

Erlang Solutions has released MongooseIM 3.5.0 (go check the changelog), and published an article on GDPR in instant messaging.

This month the Ignite Realtime community has released:

Marek Foss has announced ejabberd 19.09.1.

Clients and applications

Apps for users have been updated:

Salut à Toi progress notes for week 42 and week 44 have been published.

Libraries

QXmpp has been released in versions 1.0.1 and 1.1.0.

Extensions and specifications

Last Call

Title: XMPP Compliance Suites 2020

Abstract: This document defines XMPP application categories for different use cases (Core, Web, IM, and Mobile), and specifies the required XEPs that client and server software needs to implement for compliance with the use cases.

URL: https://xmpp.org/extensions/xep-0423.html

Updated

  • Version 0.5.0 of XEP-0405 (Mediated Information eXchange (MIX): Participant Server Requirements) has been released.
  • Version 0.3.0 of XEP-0402 (Bookmarks 2 (This Time it's Serious)) has been released.

Thanks all!

This XMPP Newsletter is produced collaboratively by the community.

Thanks to Nyco, MDosch, Daniel, Guus, Link Mauve, and mwild1 for their help in creating it!

Please follow our Twitter account @xmpp, and relay the XMPP news.

License

This newsletter is published under CC by-sa license: https://creativecommons.org/licenses/by-sa/4.0/

Subscribe by email

See you next month!

by nyco at November 08, 2019 12:00

November 06, 2019

Ignite Realtime Blog

HTTP File Upload plugin 1.1.1 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.1.1 of the HTTP File Upload plugin for Openfire!

This plugin enables users to share files in one-on-one and group chats by uploading a file to a server and providing a link.

This update allows announced URLs to be HTTP instead of HTTPS and updates the main component library.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin from the HTTP File Upload plugin archive page.

For other release announcements and news follow us on Twitter

Posts: 3

Participants: 2

Read full topic

by @wroot wroot at November 06, 2019 19:15

Erlang Solutions

MongooseIM: Designed with privacy in mind

Let’s face it. We are living in an age where all the big technology players gather and process huge piles of user data, ranging from our behavioural patterns to our location data. Hence, we receive personalized emails from online retail stores we have visited for just a second, and personalized ads for stores in our vicinity are displayed in our social media streams.

Consequently, more and more people are becoming aware of how their privacy could be at risk with all of that data collection. In turn, the European Union has moved to protect consumer rights by implementing privacy guidelines in the form of the General Data Protection Regulation (GDPR), which governs how consumer data can be handled by third parties. In fact, over 200,000 privacy violation cases have been filed during the course of the last year, followed by over €56m in fines for data breaches. Therefore, the stakes are high for all messaging service providers out there.

You might wonder: “Why should this matter to me? After all, my company is not in Europe.” Well, if any of the users of your messaging service are located in the EU, you are affected by GDPR just as if you hosted your service right there. Feeling uneasy? Don’t worry, the MongooseIM team has got you covered. Please welcome the new MongooseIM, which brings us full GDPR compliance.

Privacy by Design

A new concept was defined with the dawn of GDPR - privacy by default. It is assumed that the software solution being used follows the principles of minimising and limiting, hiding and protecting, separating, and aggregating, as well as providing privacy by default.

Minimise and limit

The minimise and limit principle concerns the amount of personal data gathered by a service. The general principle here is to take only the bare minimum required for the service to run, instead of saving unnecessary data just in case. If more data is taken in, the unnecessary part should be deleted. Luckily, MongooseIM uses only the bare minimum of personal data provided by the users and relies on the users themselves to provide more if they wish - e.g. by filling out the roster information. Moreover, since it implements XMPP and is open source, everybody has insight into how the data is processed.

Hide and protect

The hide and protect principle refers to the fact that user data should not be made public and should be hidden from plain view, preventing third parties from identifying users through personal data or its interrelation. We have tackled that by handling the creation of JIDs and by making recommendations regarding log collection and archiving.

What is this all about? See, JIDs are the central and focal point of MongooseIM's operation, as they are the users' unique identifiers in the system. As long as a JID does not contain any personally identifiable information, like a name or a telephone number, it is no more than a pseudonymous identifier and cannot be linked to the individual it represents. This is why one should refrain from putting personally identifiable information in JIDs. For that reason, our release includes a mechanism that allows automatic user creation with random JIDs, which you can invoke by typing ‘register’ in the console. Specific JIDs are created by intentionally invoking a different command (register_identified).

Still, it is possible that MongooseIM logs contain personally identifiable information, such as IP addresses, that could be correlated with JIDs. Even though the JID is anonymous, an IP address next to a JID might lead to the person behind it through correlation. That is why we recommend that installations with privacy in mind set their log level to at least ‘warning’, in order to avoid breaches of privacy while still maintaining log usability.

Separate and aggregate

The separate principle boils down to partitioning user data into chunks rather than keeping it in a monolithic DB. Each chunk should contain only the private data necessary for its own functioning. Such separation makes it harder to identify a person through correlation, as the data is scattered and isolated - hence the popularity of microservices. Since MongooseIM is an XMPP server written in Erlang, it is naturally partitioned into modules that have their own storage backends. In this way, private data is separated by default in MongooseIM and can also be handled individually - e.g. by deleting all the private data relating to one function.

The aggregation principle refers to the fact that all data should be processed in an aggregated manner, not one focused on detailed personal cases. For instance, behavioural patterns should be representative of a concrete but not identifiable cohort, rather than of a certain Rick Sanchez or Morty Smith. All the usage data processed by MongooseIM is devoid of any personally identifiable traits and instead tracks metrics relevant to the health of the server. The same can be said of WombatOAM if you pair it with MongooseIM. Therefore, aggregation is supported by default.

Privacy by default

It is assumed that the user should be offered the highest degree of privacy by default. This is highly dependent on your own implementation of the service running on top of MongooseIM. However, if you follow the recommendations laid out in this post, you can be sure it is implemented well on the backend side, as we do not differentiate between the levels of privacy being offered.

The Right of Access

According to GDPR, each user has the right of access to their own data that is kept by a service provider. That data includes not only the personal data provided by the user, but also all the derivative data generated by MongooseIM on its basis. That includes data held in mod_vcard, mod_roster, mod_mam, mod_offline, mod_pubsub, mod_private, mod_inbox, and logs. If we add a range of PubSub backends and MAM backends to the fray, one can see it gets complicated.

With MongooseIM we have put a lot of effort into making retrieval as painless as possible for the system administrators who oversee day-to-day operations. That is why we have developed a mechanism, started by executing the retrieve_personal_data command, that collects all the personal and derivative data belonging to the user behind a specific JID. The command executes for all the modules, whether they are enabled or disabled. All the relevant data is then extracted per module and returned to the user in the form of an archive.

In order to facilitate data collection, we have changed the schemas for all of our MAM backends. This was done to allow swift data extraction, since until now running such a query was very inefficient and resource-hungry. Of course, we have prepared migration strategies for the affected backends.

The Right to be Forgotten

The right to be forgotten goes alongside the right of access. Each user has the right to remove their footprint from the service. Since we know that retrieval from all the modules listed above is problematic, removal is even harder.

We have implemented a mechanism that removes the user account, leaving behind only the JID. You can run it by executing the “unregister” command. All of the private data not shared with other users is deleted from the system. In contrast, all of the private data that is shared with other users - e.g. group chat messages or PubSub flat nodes - is left intact, as that content is not owned by one party alone.

Logs are not part of this action. If the log level is set to at least ‘warning’, there is no personal data that can be tied to the JIDs in the first place, so there is no need for removal.

Final Words on GDPR

The elements above make MongooseIM fully compliant with the current GDPR. We have continued our commitment to making MongooseIM the most GDPR compliant instant messaging platform in our recent release, MongooseIM 3.5. You can read about the latest changes here. However, you have to remember that this is only one piece of the puzzle. Since MongooseIM is a backend to a service, there are other considerations that have to be fulfilled for the entire service to be GDPR compliant. Some of these are process-oriented requirements - informing, enforcing, controlling, and demonstrating - that have to be taken into account during service design.

Changelog

Please feel free to read the detailed changelog. Here, you can find a full list of source code changes and useful links.

Test our work on MongooseIM and share your feedback

Help us improve MongooseIM:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter
  4. Download Docker image with new release.
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.

We thought you might also be interested in:

XMPP Protocol - MongooseIM product

Erlang Solutions - what we do

Our Erlang & Elixir consultancy

November 06, 2019 17:32

November 05, 2019

Erlang Solutions

Build a complete iOS messaging app using XMPPFramework - Part 2

First steps: XMPPFramework

Build a complete iOS messaging app using XMPPFramework is a tutorial that shows you how to build a fully functional instant messaging iOS app using the very cool XMPPFramework library and Swift 3. In this part, we are going to get our hands dirty! To recap on the theory, or if you just landed here randomly, have a quick read through the first part, then get your Xcode ready and let’s start!

In this issue we are going to integrate the library into our project, create a connection with the server, and authenticate. The XMPPFramework library is the most used XMPP library for iOS and macOS. At the beginning it may be a little overwhelming, but after a few days working with it you will learn to love it.

Installing the library

Let’s create a brand new Xcode project and install the library. In this tutorial we are going to be using Swift 3. The easiest way to integrate XMPPFramework to the project is using CocoaPods.

Let’s create our Podfile using the pod init command in the folder where our .xcodeproj lives. There are thousands of forks but the maintained one is the original: robbiehanson/XMPPFramework.

So let’s add the pod to our Podfile and remember to uncomment the use_frameworks!.

use_frameworks!

target 'CrazyMessages' do
    pod 'XMPPFramework', :git=> 'git@github.com:robbiehanson/XMPPFramework.git', :branch => 'master'
end


Then run pod install, and CocoaPods will do its magic and create a .xcworkspace with the library integrated. Now we just need to import XMPPFramework in the files where we want to use the library, and that’s it.


Starting to build our Instant Messaging app

The most important thing in an XMPP application is the stream; that’s where we are going to “write” our stanzas, so we need an object that is going to hold it. We are going to create an XMPPController class with an XMPPStream:

import Foundation
import XMPPFramework

class XMPPController: NSObject {
    var xmppStream: XMPPStream

    init() {
        self.xmppStream = XMPPStream()  
    }

}

 

We are dealing with a highly asynchronous library here. For every action we are going to have a response some time in the future. To handle this XMPPFramework defines the XMPPStreamDelegate. So implementing that delegate is going to help us answer lots of different questions like: “How do I know when XMPP has successfully connected?”, “How do I know if I’m correctly authenticated?”, “How do I know if I received a message?”. XMPPStreamDelegate is your friend!

So we have our XMPPController and our XMPPStream, what do we need to do now? Configure our stream with the hostName, port and our JID. To provide all this info to the controller, we are going to make some changes to the init so it can receive all these parameters:

enum XMPPControllerError: Error {
    case wrongUserJID
}

class XMPPController: NSObject {
    var xmppStream: XMPPStream

    let hostName: String
    let userJID: XMPPJID
    let hostPort: UInt16
    let password: String

    init(hostName: String, userJIDString: String, hostPort: UInt16 = 5222, password: String) throws {
        guard let userJID = XMPPJID(string: userJIDString) else {
            throw XMPPControllerError.wrongUserJID
        }

        self.hostName = hostName
        self.userJID = userJID
        self.hostPort = hostPort
        self.password = password

        // Stream Configuration
        self.xmppStream = XMPPStream()
        self.xmppStream.hostName = hostName
        self.xmppStream.hostPort = hostPort
        self.xmppStream.startTLSPolicy = XMPPStreamStartTLSPolicy.allowed
        self.xmppStream.myJID = userJID

        super.init()
    }
}

 

Our next step is going to actually connect to a server and authenticate using our userJID and password, so we are adding a connect method to our XMPPController.

func connect() {
    if !self.xmppStream.isDisconnected() {
        return
    }

    try! self.xmppStream.connect(withTimeout: XMPPStreamTimeoutNone)
}

 

But how do we know we have successfully connected to the server? As I said earlier, we need to check for a suitable delegate method from XMPPStreamDelegate. After we connect to the server we need to authenticate so we are going to do the following:

extension XMPPController: XMPPStreamDelegate {

    func xmppStreamDidConnect(_ stream: XMPPStream!) {
        print("Stream: Connected")
        try! stream.authenticate(withPassword: self.password)
    }

    func xmppStreamDidAuthenticate(_ sender: XMPPStream!) {
        self.xmppStream.send(XMPPPresence())
        print("Stream: Authenticated")
    }
}

 

We need to test this. Let’s just create an instance of XMPPController in the AppDelegate to test how it works:

try! self.xmppController = XMPPController(hostName: "host.com",
                                     userJIDString: "user@host.com",
                                          password: "password")
self.xmppController.connect()

If everything goes fine we should see two messages in the logs, but of course that’s not happening; we missed something. We never told our xmppStream which object was its delegate! We need to add the following line after the super.init():

self.xmppStream.addDelegate(self, delegateQueue: DispatchQueue.main)

If we run the app again:

Stream: Connected
Stream: Authenticated

 

Success! We have our own XMPPController with a fully functional and authenticated stream!

Something that may catch your attention is how we are setting our delegate: we are not doing

self.xmppStream.delegate = self

 

Why not? Because this way we can “broadcast” the events to multiple delegates; we can have 10 different objects implementing those methods. Also, we can choose the thread on which we want to receive each callback; in the previous example we chose the main thread.
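The multicast-delegate idea is easy to sketch in plain Swift. The following is an illustrative sketch only, not XMPPFramework’s actual implementation; the names `MulticastDelegate` and `StreamEvents` are made up for this example:

```swift
import Foundation

// Hypothetical listener protocol, for illustration only.
protocol StreamEvents: AnyObject {
    func streamDidConnect()
}

// A minimal multicast delegate: it keeps weak references to any
// number of delegates, each paired with the queue it wants to be
// called on, and forwards every event to all of them.
final class MulticastDelegate {
    private struct Entry {
        weak var delegate: StreamEvents?
        let queue: DispatchQueue
    }
    private var entries: [Entry] = []

    func addDelegate(_ delegate: StreamEvents, delegateQueue: DispatchQueue) {
        entries.append(Entry(delegate: delegate, queue: delegateQueue))
    }

    func fireDidConnect() {
        for entry in entries {
            // Each delegate is notified asynchronously on its own queue.
            entry.queue.async { entry.delegate?.streamDidConnect() }
        }
    }
}
```

The weak references mean a delegate that gets deallocated simply stops receiving events, which is why addDelegate doesn’t force you to remember to unregister in every case.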

Getting a Log In

Our app is super ugly, let’s put on some makeup! We have nothing but an XMPPController and a hardcoded call in the AppDelegate. I’m going to create a ViewController that is going to be presented modally as soon as the app starts; that ViewController will have the necessary fields to log in to the server.

I’m going to create a LogInViewControllerDelegate that is going to tell our ViewController that the Log In button was pressed, and that’s it. In that delegate implementation we are going to create our XMPPController, add the ViewController as a delegate of the XMPPStream and connect!

extension ViewController: LogInViewControllerDelegate {

    func didTouchLogIn(sender: LogInViewController, userJID: String, userPassword: String, server: String) {
        self.logInViewController = sender

        do {
            try self.xmppController = XMPPController(hostName: server,
                                                     userJIDString: userJID,
                                                     password: userPassword)
            self.xmppController.xmppStream.addDelegate(self, delegateQueue: DispatchQueue.main)
            self.xmppController.connect()
        } catch {
            sender.showErrorMessage(message: "Something went wrong")
        }
    }
}

 

Why are we adding ViewController as a delegate of XMPPStream if our XMPPController already implements that delegate? Because we need to know in our ViewController whether the connection and authentication were successful, so we are able to dismiss the LogInViewController or show an error message if something failed. This is why being able to add multiple delegates is so useful.

So, as I said, I’m going to make ViewController conform to XMPPStreamDelegate:

extension ViewController: XMPPStreamDelegate {

    func xmppStreamDidAuthenticate(_ sender: XMPPStream!) {
        self.logInViewController?.dismiss(animated: true, completion: nil)
    }

    func xmppStream(_ sender: XMPPStream!, didNotAuthenticate error: DDXMLElement!) {
        self.logInViewController?.showErrorMessage(message: "Wrong password or username")
    }

}

 

And that’s it! Our app can log in to our server as I’m showing here:

Logging!

We’ve been talking a lot about XMPP, stanzas and streams… but is there a way I can see the stream? Yes sir! XMPPFramework has us covered!

XMPPFramework ships with CocoaLumberjack, a pretty well known logging framework. We just need to configure it, set the logging level we want, and that’s it. Logs are going to start showing up!

Configuring CocoaLumberjack

This is a really simple task: you just need to add the following line to your func application(application: UIApplication, didFinishLaunchingWithOptions ... method (remember to import CocoaLumberjack):

DDLog.add(DDTTYLogger.sharedInstance(), with: DDLogLevel.all)

I’m not going to paste the whole connection log here, because at this stage of our learning it makes no sense to try to understand everything that’s going on. But I think showing what some stanzas look like is a good idea. To do this I’m going to send messages from Adium.

I’m going to send this <message/>:

<message to="test.user@erlang-solutions.com">
    <body>This is a message sent from Adium!</body>
</message>

 

Let’s see what it looks like when it reaches our app:

<message xmlns="jabber:client" from="iamadium@erlang-solutions.com/MacBook-Air" to="test.user@erlang-solutions.com">
   <body>This is a message sent from Adium!</body>
</message>

 

Let’s send a <presence/> from Adium:

<presence>
    <status>On vacation</status>
</presence>

 

We are receiving:

<presence xmlns="jabber:client" from="iamadium@erlang-solutions.com/MacBook-Air" to="test.user@erlang-solutions.com">
   <status>On vacation</status>
</presence>

 

No doubts at all, right? We send something and we receive it on the other end. That’s it!

Test Time!

I want to be sure that you are understanding and following everything, and not just copy-pasting from a tutorial (as I usually do 🙊). So if you are able to answer these questions, you are on the right track!

  • Why am I sending a presence after successfully authenticating? What happens if I don’t send it?
  • What happens if I write a wrong server URL in the Log In form? How do I fix this problem if there is a problem…
  • How do I detect if suddenly the stream is disconnected from the server? (maybe a network outage?)
  • How do I detect if the user/password was wrong?

If you need help leave a message!

The sample project is on Github!

The next part is going to be about the Roster, and if I have space I would also like to add sending and receiving messages. I’ve been super busy lately, so I’m not sure when I’m going to be able to deliver the next issue, but I’ll try to work on it as soon as I have some free minutes to spare!

PS: Also take a look at MongooseIM, our XMPP based open source mobile messaging platform. 

 

We thought you might also be interested in:

Our XMPP Protocol product - MongooseIM

Our portfolio of Erlang based products

XMPPFramework - build an iOS app part 1

November 05, 2019 15:12

November 04, 2019

Erlang Solutions

Build a complete iOS messaging app using XMPPFramework - Part 1

YAXT??! Yet another XMPP tutorial?

 

Well, this is going to be another tutorial, but I’m going to try to make it a little bit different. This is an XMPP tutorial from an iOS developer’s perspective. I’ll try to answer all the questions I had when I started working in this area. This journey is going to go from no XMPP knowledge at all to having a fully functional instant messaging iOS app using this cool protocol. We are going to be using the super awesome (yet overwhelming at the beginning…) XMPPFramework library, and the idea is also to mix in some iOS concepts that you are going to need for your app.

What’s XMPP?

 

From Wikipedia: Extensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on XML.

This basically means XMPP is a protocol for exchanging stuff. What kind of stuff? Messages and presences. We all know what messages are, but what about presences? A presence is just a way of sharing a “status”, that’s it. You can be 'online’, 'offline’, 'having lunch’, or whatever you want. There’s another important word here: Extensible, meaning it can grow. It started as an instant messaging protocol and has grown into multiple fields, for example IoT (Internet of Things). And last, but not least: every piece of information we are going to exchange under this protocol is going to be XML. I can hear you complaining but… come on, it’s not that bad!

Why do we need XMPP? Why not just REST?

 

Well, what other options do we have? On the one hand, a custom solution means building everything from scratch, and that takes time. On the other hand, we have XMPP, a well-tested technology broadly used by millions of people every day, so that’s an advantage over a custom approach.

Every time I talk about XMPP, someone asks me 'Why not just REST?’. Well, there is a misconception here. REST is not a protocol; it’s just a way of architecting a networked application, a standardized way of doing something (that I love, btw). So let’s change the question to something that makes more sense: “Why not just build a custom REST chat application?”. The first thing that comes to mind is what I already explained in the previous paragraph, but there is something else. How do I know when someone has sent me a message? For XMPP this is trivial: we have an open connection all the time, so as soon as a message arrives at the server, it sends us the message. We have a full-duplex channel. With REST, on the other hand, the only solution is polling: we would need to ask the server for new messages from time to time to see if there is something new for us. That sucks. So we would have to add a mechanism that lets us receive messages as soon as they are created, like SSE or WebSockets.

There is one more XMPP advantage over a custom REST chat application. REST uses HTTP, an application-level protocol built on top of a transport-level protocol: TCP. So every time you want to use your REST solution, you will need HTTP, a protocol that is not always available everywhere (maybe you need to embed this in a cheap piece of hardware?). XMPP, on the other hand, is built directly on top of TCP, which is always going to be available.

What’s the basic stuff I need to know to get started?

 

Well, you know a lot already but let’s make a list. Lists are always good:

  • XMPP is built on top of TCP. It keeps an open connection all the time.
  • Client/Server architecture. Messages always go through a server.
  • Everything we send and receive is going to be XML, and it’s called a Stanza.
  • We have three different types of stanzas: iq, message and presence.
  • Every individual on the XMPP network is univocally identified by a JID (Jabber ID).
  • All the stanzas are contained in a Stream. Let’s imagine the Stream as a white canvas where you and the server write the stanzas.
  • Stream, iq, message and presence are the core of XMPP. You can find everything perfectly detailed in RFC 6120.
  • XMPP can be extended to accomplish different stuff. Each extension is called XEP (XMPP Extension Protocol).

 

What’s a JID?

Jabber ID (JID) is how we univocally identify each individual in XMPP. It is the address to which we are going to send our stanzas.

This is what a JID looks like: localpart@domainpart/resourcepart

  • localpart: This is your username.
  • domainpart: Server name where the localpart resides.
  • resourcepart: This is optional, and it identifies a particular client for the user. For example: I can be logged in with andres@erlang-solutions.com on my iPhone, on my Android and on my Mac at the same time… All of these will have the same localpart + domainpart but a different resourcepart

I’m sure you have already noticed how similar a JID looks to a standard email address. This is because you can connect multiple servers together, and messages are routed to the right user on the right server, just as email works. Pretty cool, right?

Sometimes you will see we have a JID with just the domain part. Why?! Because it’s also possible to send stanzas to a service instead of a user. A service? What’s a service?! Services are different pieces of an XMPP server that offer you some special functionality, but don’t worry about this right now, just remember: you can have JIDs without a localpart.
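To make the three parts concrete, here is a tiny sketch in plain Swift that splits a JID string into its parts. This is an illustration only, not XMPPFramework’s actual XMPPJID type, and it skips the validation a real implementation would need:

```swift
import Foundation

// Splits a JID into (localpart?, domainpart, resourcepart?).
// The localpart comes before '@'; the resourcepart after the first '/'.
func splitJID(_ jid: String) -> (local: String?, domain: String, resource: String?) {
    var rest = jid
    var resource: String? = nil
    if let slash = rest.firstIndex(of: "/") {
        resource = String(rest[rest.index(after: slash)...])
        rest = String(rest[..<slash])
    }
    if let at = rest.firstIndex(of: "@") {
        let local = String(rest[..<at])
        let domain = String(rest[rest.index(after: at)...])
        return (local, domain, resource)
    }
    // No '@': a bare domain JID, i.e. a service.
    return (nil, rest, resource)
}
```

For example, "andres@erlang-solutions.com/iphone" splits into localpart "andres", domainpart "erlang-solutions.com" and resourcepart "iphone", while a service JID like "muc.erlang-solutions.com" has no localpart at all.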

What’s a Stanza?

Stanza is the name of the XML pieces that we are going to be sending and receiving. The defined stanzas are: <message/>, <presence/> and <iq/>.

 

<message/>

This is a basic <message/> stanza. Every time you want to send a message to someone (a JID), you will have to send this stanza:

<message from='andres@erlang-solutions.com/iphone' to='juana@erlang-solutions.com' type='chat'>
    <body>Hey there!</body>
</message>

 

<iq/>

It stands for Info/Query. It’s a query-action mechanism: you send an iq and you will get a response to that query. You can pair the iq query with the iq response using the stanza id.

For example, we send an iq to the server to do something (don’t pay attention to what we want to do… you just need to know there is an iq stanza and how the mechanism works):

<iq to='erlang-solutions.com' type='get' id='1'>
  <query xmlns='http://jabber.org/protocol/disco#items'/>
</iq>

And we get back another iq with the same id with the result of the previous query:

<iq from='erlang-solutions.com' to='ramabit@erlang-solutions.com/Andress-MacBook-Air' id='1' type='result'>
    <query xmlns='http://jabber.org/protocol/disco#items'>
        <item jid='muc.erlang-solutions.com'/>
        <item jid='muclight.erlang-solutions.com'/>
        <item jid='pubsub.erlang-solutions.com'/>
    </query>
</iq>
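The id-based pairing works like a little ledger of pending queries. Here is an illustrative sketch of the idea in plain Swift; this is not XMPPFramework code, and the IQTracker name and its API are made up for the example:

```swift
import Foundation

// Tracks outgoing iq queries by id so that each incoming response
// can be matched back to the closure waiting for it.
final class IQTracker {
    private var pending: [String: (String) -> Void] = [:]
    private var nextID = 0

    // Registers a handler and returns the id to put on the outgoing iq.
    func send(onResponse: @escaping (String) -> Void) -> String {
        nextID += 1
        let id = String(nextID)
        pending[id] = onResponse
        return id
    }

    // Called when an iq with type='result' (or 'error') arrives:
    // look up the handler by id, run it once, and forget it.
    func handleResponse(id: String, payload: String) {
        if let handler = pending.removeValue(forKey: id) {
            handler(payload)
        }
    }
}
```

A response carrying an id we never sent (or one already handled) is simply ignored, which mirrors how the stanza id in the example above ties the result back to the original query.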

 

<presence/>

Used to exchange presence information, as you might have imagined. Usually presences are sent from the client to the server, which broadcasts them. The most basic, yet valid, presence to indicate to the server that a user is available is:

<presence/>

After a successful connection, you are not going to receive any <message/> until you make yourself available by sending the previous presence.

If you want to make yourself unavailable, you just have to send:

<presence type="unavailable"></presence>

If we want to make the presences more useful, we can send something like this:

<presence>
      <status>On vacation</status>
</presence>

 

What’s a Stream?

Before answering this, let’s refresh our memory. What’s a Unix socket? From Wikipedia: A socket is a special file used for inter-process communication, which allows communication between two processes. So a socket is a file that can be written to by two processes (on the same computer or on different computers on the same network). The client is going to write to this file, and the server too.

Ok, but how is a socket related to a Stream? Well, we are going to be connected to a server using a socket, therefore we are going to have a 'shared file’ between the client and the server. This shared file is a white canvas where we are going to start writing our XML stanzas. The first thing we are going to write to this file is an opening <stream> tag! And there you go… that’s our stream.

Perfect, I understand what a stream is, but I still don’t understand how to send a message to the server. Well, the only thing we need to do to send a message is write a <message/> stanza in our shared file. But what happens when the server wants to send me a message? Simple: it writes the message in the 'shared file’.
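For the curious, the opening tag written on that canvas looks roughly like this (a simplified version of the stream header described in RFC 6120):

```xml
<stream:stream
    from='andres@erlang-solutions.com'
    to='erlang-solutions.com'
    version='1.0'
    xmlns='jabber:client'
    xmlns:stream='http://etherx.jabber.org/streams'>
```

The server answers with its own opening <stream:stream> tag, and from then on both sides just keep appending stanzas until one of them writes the closing </stream:stream>.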

Are we ok so far?

 

I’m sure at this point you have questions like:

  • “What?! An active TCP connection open all the time? I’m used to REST! How am I going to do that?!”

    Easy, you don’t have to care about that any more! That’s why we are going to use the library, and it will take care of that.

  • “You said nothing about how to connect to the server!”

    Believe me, you don’t have to care about this either. If we start adding all this info, we are going to go crazy. Trust me, I’ve been there.

  • “What about encrypted messages? We need security! How are we going to handle this?”

    Again, you don’t have to care about this at this point. Baby steps!

 

You just need to be able to answer: “What’s XMPP?”, “How do you send a message?”, “How do you change your status in XMPP?”, “How do you ask something of the server?”, “What’s a Stream?”. If you can answer all of that, you are WAY better off than I was when I started.

All the concepts we described so far are the core of XMPP. To find out how to get started with XMPPFramework, how to connect to the server and authenticate a user, go to PART 2!

Also, check out MongooseIM, our XMPP based open source mobile messaging platform.

We thought you might also be interested in:

XMPPFramework - build an iOS app Part 2

Our portfolio of Erlang based products

November 04, 2019 16:13

Ignite Realtime Blog

Openfire 4.4.3 Release

@akrherz wrote:

The Ignite Realtime Community is pleased to announce the release of version 4.4.3 of Openfire. This release has a number of important fixes for the security and stability of the server, and signifies our effort to stabilize the 4.4 release series while working on the next feature release of Openfire, version 4.5.

You can find downloads available with the following sha1sum values for the release artifacts:

9f1b5098738efa40a2845173d0c081a859fbbe17  openfire-4.4.3-1.i686.rpm
16689568ca06b9b2e09ebe61253ff3eea0cc6580  openfire-4.4.3-1.noarch.rpm
14653d028f8a839dcdd4f9e9b3abb32c678e1610  openfire-4.4.3-1.x86_64.rpm
73b914b5aec9869da4f66e8db2eb835000eb0f9f  openfire_4.4.3_all.deb
09abd58919716e9e882610decb677fe7c540707c  openfire_4_4_3_bundledJRE.exe
d5bc12d180f3e0a6345e528b1a7da790c94e2f84  openfire_4_4_3_bundledJRE_x64.exe
c1e77185310831578e77856f5c53c2aafae9634d  openfire_4_4_3.dmg
4f2fdb4506ed240c559553232c0a22511098592b  openfire_4_4_3.exe
69b742a36199d9af7de44400248a070611336d38  openfire_4_4_3.tar.gz
accf82451821487c977e3a4c681254a7e1f75e15  openfire_4_4_3_x64.exe
02d5164666cc7c8529681142a86052cc246dd2d8  openfire_4_4_3.zip
e3f43475d400d9e0aeeecc7d1f9d897325a905d1  openfire_src_4_4_3.tar.gz
5c755ded9b1749315d2f92ac4561b1896f7ffdbd  openfire_src_4_4_3.zip

Please consider dropping by our web groupchat if you are interested in helping out with the development, documentation, and/or testing of Openfire. Please report any issues you find with Openfire in our discourse forums. Thanks for using Openfire!

Posts: 1

Participants: 1

Read full topic

by @akrherz daryl herzmann at November 04, 2019 14:01

November 03, 2019

Jérôme Poisson

SàT progress note 2019-W44

Hello,

once again I've skipped last week's progress note, as there was not much to say and I don't think it's worth writing something just for the sake of writing something. So, in the future, I'll publish a note every week when it's worth it, and may skip one or two weeks in case of low activity, or if I'm continuing a long task.

During the last two weeks I've been focusing on Cagou on Android. The 0.7 version is a proof of concept and gives an idea of the potential, but it's not good enough to use as a daily client; I'll try to fix this for the 0.8 version.

Cagou is now windowed on Android and no longer fullscreen by default. It was initially fullscreen to gain some space, but in the end that's not a good idea, as it hides a lot of useful information from the user. I still plan to add a fullscreen mode in some cases (when viewing photo albums, for instance).

I've added Cagou to the share menu of Android, so it can now be used to share any content, be it some text data, or an image/file. That's one of the important things which was missing, and it's pretty cool to see it when sharing something on the phone.

When possible, a preview of the data is shown (this works only for text and images at the moment), and then you just have to touch the name of the contact to whom you want to send the file/data. Below is a screenshot of image sharing.

screenshot of share widget first draft

The share widget is not polished yet, it's a first draft. I've tried to keep the design simple, and the option to resize the image is only shown if the image is too big, in which case it is activated by default.
It will be possible to use the same widget on desktop, I just need to do the integration with the various APIs.

I'm now working on improving the performance of the chat history. So far, all the messages were put in a BoxLayout. It works, but it keeps all widgets in memory, and this is not optimized for something which may become big like a chat history. On desktop this is fine, but on Android it is quite noticeable that scrolling the chat history is slow.

A recent Kivy widget is usable to improve that: RecycleView. RecycleView only calculates what is necessary to show the widgets actually displayed on the screen; this is far more efficient, and we can expect a good performance boost by using it. So I've started to move the history to RecycleView, but I've run into problems for which I've created Kivy issues (#6580 and #6582). I expect that it will take some time before those tickets are handled and fixed, so I'm trying to work around those problems for now.

I have some test code, but it's not polished yet. Once that's done, I'll implement the loading of more history when we reach the top of the chat (either with an infinite scroll or a button).

There are a couple of other improvements that I would like to do before releasing 0.8 to make Cagou good to use on Android, but let's talk about them in a future progress note.

Last but not least, jnanar, the maintainer of the SàT packages on Arch Linux, has just said that the dev versions of SàT and Cagou (the new Python 3 version) are now working (but not Libervia yet), and that tests/feedback are welcome. Thanks to him for maintaining those packages!

by goffi at November 03, 2019 20:24

Paul Schaub

On “Clean Architecture”

I recently did what I rarely do: buy and read an educational book. Shocking, I know, but I can assure you that I’m fine 😉

The book I ordered is Clean Architecture – A Craftsman’s Guide to Software Structure and Design by Robert C. Martin. As the title suggests it is about software architecture.

I’ve barely read half of the book, but I’ve already learned a ton! I find it curious that as a halfway decent programmer, I often more or less know what Martin is talking about when he is describing a certain architectural pattern, but I often didn’t know said pattern had a name, or what consequences using said pattern really implied. It is really refreshing to see the bigger picture and having all the pros and cons of certain design decisions layed out in an overview.

One important point the book tries to put across is how important it is to distinguish between important things like business rules and not-so-important things like details. Let me try to give you an example.

Let's say I want to build a reactive Android XMPP chat application using Smack (foreshadowing? 😉 ). Let's identify the details. Surely Smack is a detail. Even though I'd be using Smack for some of the core functionalities of the app, I could just as well choose another XMPP library like babbler to get the job done. But there are even more details, Android for example.

In fact, when you strip out all the details, you are left with a reactive chat application. Even XMPP is a detail! A chat application doesn't care what protocol you use to send and receive messages; heck, it doesn't even care whether it runs on Android or on any other device (that can run Java).

I’m still not quite sure, if the keyword reactive is a detail, as I’d say it is a more a programming paradigm. Details are things that can easily be switched out and/or extended and I don’t think you can easily replace a programming paradigm.

The book does a great job of identifying and describing simple rules that, when applied to a project, lead to a cleaner, more structured architecture. All in all, it teaches how important software architecture in general is.

There is, however, one drawback to the book. It constantly makes you want to jump straight into your next big project with lots of features, so it is hard to keep reading while being all that excited ;P

If you are a software developer – no matter whether you work on small hobby projects or big enterprise products, and whether or not you aspire to become a Software Architect – I can only recommend reading this book!

One more thought: If you want to support a free software project, maybe donating books like this is a way to contribute?

Happy Hacking!

by vanitasvitae at November 03, 2019 19:21

October 30, 2019

Ignite Realtime Blog

User Status Openfire plugin 1.2.2 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.2.2 of the User Status plugin for Openfire!

The User Status plugin automatically saves the last status (presence, IP address, logon and logoff time) per user and resource to the userStatus table in the Openfire database.

This update fixes the HSQLDB upgrade scripts.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the User Status plugin archive page

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at October 30, 2019 19:49

Erlang Solutions

XMPP Protocol: Build an iOS Instant Messaging App, Part 1

YAXT??! Yet another XMPP tutorial?

Well, this is going to be another tutorial, but I'm going to try to make it a little bit different. This is an XMPP tutorial from an iOS developer's perspective. I'll try to answer all the questions I had when I started working in this area. This journey is going to go from no XMPP knowledge at all to having a fully functional instant messaging iOS app using this cool protocol. We are going to be using the super awesome (yet overwhelming at the beginning…) XMPPFramework library, and the idea is also to mix in some iOS concepts that you are going to need for your app.

What’s XMPP?

From Wikipedia: Extensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on XML.

This basically means XMPP is a protocol for exchanging stuff. What kind of stuff? Messages and presences. We all know what messages are, but what about presences? A presence is just a way of sharing a “status”, that’s it. You can be ‘online’, ‘offline’, ‘having lunch’, or whatever you want. Also, there’s another important word: Extensible, meaning it can grow. It started as an instant messaging protocol and it has grown into multiple fields, for example IoT (Internet of Things). And last, but not least: every piece of information we are going to exchange under this protocol is going to be XML. I can hear you complaining but… come on, it’s not that bad!

Why do we need XMPP? Why not just REST?

Well, what other options do we have? On the one hand, a custom solution means building everything from scratch, and that takes time. On the other hand, we have XMPP, a thoroughly tested technology used by millions of people every day, so we can say that’s an advantage over a custom approach.

Every time I talk about XMPP, someone asks me ‘Why not just REST?’. Well, there is a misconception here. REST is not a protocol; it’s just a way of architecting a networked application, a standardized way of doing something (that I love, btw). So let’s change the question to something that makes more sense: “Why not just build a custom REST chat application?”. The first thing that comes to my mind is what I already explained in the previous paragraph, but there is something else. How do I know when someone has sent me a message? For XMPP this is trivial: we have an open connection all the time, so as soon as a message arrives at the server, it will send us the message. We have a full-duplex connection. With REST, on the other hand, the only plain solution is polling: we would need to ask the server for new messages from time to time to see if there is something new for us. That sucks. So we would have to add a mechanism that allows us to receive messages as soon as they are created, like SSE or WebSockets.

There is one more XMPP advantage over a custom REST chat application. REST uses HTTP, an application-level protocol built on top of a transport-level protocol: TCP. So every time you want to use your REST solution, you will need HTTP, a protocol that is not always available everywhere (maybe you need to embed this in a cheap piece of hardware?). XMPP, on the other hand, is built directly on top of TCP, which is going to be available almost everywhere.

What’s the basic stuff I need to know to get started?

Well, you know a lot already, but let’s make a list. Lists are always good:

  • XMPP is built on top of TCP. It keeps an open connection all the time.
  • Client/Server architecture. Messages always go through a server.
  • Everything we send and receive is going to be XML, and it’s called a stanza.
  • We have three different types of stanzas: iq, message and presence.
  • Every individual on the XMPP network is uniquely identified by a JID (Jabber ID).
  • All the stanzas are contained in a stream. Let’s imagine the stream as a white canvas where you and the server write the stanzas.
  • Stream, iq, message and presence are the core of XMPP. You can find everything perfectly detailed in RFC 6120.
  • XMPP can be extended to accomplish different stuff. Each extension is called a XEP (XMPP Extension Protocol).

What’s a JID?

The Jabber ID (JID) is how we uniquely identify each individual in XMPP. It is the address to which we are going to send our stanzas.

This is what a JID looks like: localpart@domainpart/resourcepart

  • localpart: This is your username.
  • domainpart: The name of the server where the localpart resides.
  • resourcepart: This is optional, and it identifies a particular client of the user. For example: I can be logged in with andres@erlang-solutions.com on my iPhone, on my Android and on my Mac at the same time… all of these will share the same localpart + domainpart but have a different resourcepart.

I’m sure you have already noticed how similar a JID looks to a standard email address. This is because you can connect multiple servers together, and messages are routed to the right user on the right server, just as email works. Pretty cool, right?

Sometimes you will see a JID with just the domainpart. Why?! Because it’s also possible to send stanzas to a service instead of a user. A service? What’s a service?! Services are different pieces of an XMPP server that offer you some special functionality, but don’t worry about this right now; just remember: you can have JIDs without a localpart.
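
The localpart/domainpart/resourcepart split described above can be sketched in a few lines. This is illustrative only; a real client should rely on a library that implements full JID validation and normalization:

```python
def split_jid(jid: str):
    """Split a JID into (localpart, domainpart, resourcepart).

    localpart and resourcepart are optional and returned as None when
    absent. This sketch ignores the escaping and string-preparation
    rules a real implementation must apply.
    """
    rest, _, resource = jid.partition("/")
    local, _, domain = rest.rpartition("@")  # domain is `rest` when no "@"
    return (local or None, domain, resource or None)
```

For example, `split_jid("andres@erlang-solutions.com/iphone")` yields the three parts, while a service address such as `muc.erlang-solutions.com` comes back with no localpart.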

What’s a Stanza?

Stanza is the name of the XML pieces that we are going to be sending and receiving. The defined stanzas are: <message/>, <presence/> and <iq/>.

<message/>

This is a basic <message/> stanza. Every time you want to send a message to someone (a JID), you will have to send this stanza:

<message from='andres@erlang-solutions.com/iphone' to='juana@erlang-solutions.com' type='chat'>
    <body>Hey there!</body>
</message>
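
In practice you rarely write stanza XML by hand; a library assembles it for you. As an illustration (this is not the XMPPFramework API, just Python's standard XML tooling), the same <message/> could be built programmatically:

```python
import xml.etree.ElementTree as ET

# Build the same <message/> stanza programmatically.
msg = ET.Element("message", {
    "from": "andres@erlang-solutions.com/iphone",
    "to": "juana@erlang-solutions.com",
    "type": "chat",
})
body = ET.SubElement(msg, "body")
body.text = "Hey there!"

# Serialize to the wire format shown above.
print(ET.tostring(msg, encoding="unicode"))
```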

<iq/>

It stands for Info/Query. It’s a query-action mechanism: you send an iq and you will get a response to that query. You can pair the iq-query with the iq-response using the stanza id.

For example, we send an iq to the server to do something (don’t pay attention to what we want to do… you just need to know there is an iq stanza and how the mechanism works):

<iq to='erlang-solutions.com' type='get' id='1'>
  <query xmlns='http://jabber.org/protocol/disco#items'/>
</iq>

And we get back another iq with the same id with the result of the previous query:

<iq from='erlang-solutions.com' to='ramabit@erlang-solutions.com/Andress-MacBook-Air' id='1' type='result'>
    <query xmlns='http://jabber.org/protocol/disco#items'>
        <item jid='muc.erlang-solutions.com'/>
        <item jid='muclight.erlang-solutions.com'/>
        <item jid='pubsub.erlang-solutions.com'/>
    </query>
</iq>
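
The id-based pairing works like a tiny request/response table: the client remembers each outgoing iq id and dispatches the response when it arrives. A hedged sketch of that bookkeeping (this is not how XMPPFramework does it internally; the class and names are hypothetical):

```python
import itertools

class IqTracker:
    """Pair outgoing iq queries with incoming iq results by stanza id."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._pending = {}  # stanza id -> callback awaiting the result

    def send(self, callback):
        """Register a callback and return the id to stamp on the outgoing iq."""
        iq_id = str(next(self._next_id))
        self._pending[iq_id] = callback
        return iq_id

    def on_result(self, iq_id, payload):
        """Dispatch an incoming result iq to the matching pending query."""
        callback = self._pending.pop(iq_id, None)
        if callback is not None:
            callback(payload)
```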

<presence/>

Used to exchange presence information, as you might have imagined. Usually presences are sent from the client to the server and broadcast by it. The most basic, yet valid, presence to indicate to the server that a user is available is:

<presence/>

After a successful connection, you are not going to receive any <message/> until you make yourself available by sending the previous presence.

If you want to make yourself unavailable, you just have to send:

<presence type="unavailable"></presence>

If we want to make the presences more useful, we can send something like this:

<presence>
      <status>On vacation</status>
</presence>

What’s a Stream?

Before answering this, let’s refresh our minds. What’s a Unix socket? From Wikipedia: A socket is a special file used for inter-process communication. It allows communication between two processes. So a socket is a file that can be written to by two processes (on the same computer or on different computers in the same network). The client is going to write to this file, and the server will too.

Ok, but how is a socket related to a stream? Well, we are going to be connected to a server through a socket, therefore we are going to have a ‘shared file’ between the client and the server. This shared file is a white canvas where we are going to start writing our XML stanzas. The first thing we are going to write to this file is an opening <stream> tag! And there you go… that’s our stream.

Perfect, I understand what a stream is, but I still don’t understand how to send a message to the server. Well, the only thing we need to do to send a message is write a <message/> stanza into our shared file. But what happens when the server wants to send me a message? Simple: it will write the message into the ‘shared file’.
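
The ‘shared file’ picture can be demonstrated with a plain pair of connected sockets: either end writes XML fragments and the other end reads them. This is only a sketch of the transport idea, not a real XMPP session (which involves full stream headers, feature negotiation and TLS):

```python
import socket

# Two connected endpoints standing in for client and server.
client, server = socket.socketpair()

# The client opens its side of the conversation by writing into the
# shared channel; a real stream header carries more attributes.
client.sendall(b"<stream:stream to='erlang-solutions.com'>")

# The server reads what the client wrote...
print(server.recv(4096).decode())

# ...and can write back into the same channel at any time.
server.sendall(b"<message><body>Hey there!</body></message>")
print(client.recv(4096).decode())

client.close()
server.close()
```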

Are we ok so far?

I’m sure at this point you have questions like:

  • “What?! An active TCP connection open all the time? I’m used to REST! How am I going to do that?!”

           Easy, you don’t have to care about that any more! That’s why we are going to use the library, and it will take care of that.

  • “You said nothing about how to connect to the server!”

           Believe me, you don’t have to care about this either. If we start adding all this info, we are going to go crazy. Trust me, I’ve been there.

  • “What about encrypted messages? We need security! How are we going to handle this?”

           Again, you don’t have to care about this at this point. Baby steps!

You just need to be able to answer: “What’s XMPP?”, “How do you send a message?”, “How do you change your status in XMPP?”, “How do you ask the server for something?”, “What’s a stream?”. If you can answer all of those, you are WAY better off than I was when I started.

All the concepts we described so far are the core of XMPP. To find out how to get started with XMPPFramework, how to connect to the server and how to authenticate a user, go to PART 2!

Also, check out MongooseIM, our XMPP based open source mobile messaging platform.

We thought you might also be interested in:

XMPPFramework - build an iOS app Part 2

Our portfolio of Erlang based products

October 30, 2019 16:23

October 29, 2019

Tigase Blog

BeagleIM 3.3 and Siskin IM 5.3 released

New versions of XMPP clients for Apple's mobile and desktop platforms have been released.

Keep reading for the list of changes.

by wojtek at October 29, 2019 13:49

October 26, 2019

Erlang Solutions

Escalus 4.0.0: faster and more extensive XMPP testing

Escalus 4.0.0 is the newest version of our XMPP client library for Erlang, a component of our MongooseIM platform of tools and services. Created as a tool for convenient testing of XMPP servers, it can also be used as a standalone Erlang application. In contrast to other XMPP clients, Escalus provides a rich testing API (assertions, stanza building…) and a high degree of flexibility that allows you to completely redefine its behaviour.

It is used by Erlang Solutions for integration, load and stress testing of the MongooseIM XMPP server. The newest version, Escalus 4.0.0, now supports Erlang 20 and offers the following features and improvements:

New, RapidXML-based XML parser

Escalus 4.0.0 includes a new version of our exml Erlang XML parser! The parser’s processing layer was rewritten from scratch, extensively profiled and optimised; as a result, the new parser is on average 5 times faster than before when encoding and decoding XML elements.

Included XML viewer

Files containing traced XMPP stanzas now include a powerful XML viewer by Sergey Chikuyonok. The viewer renders elements in a readable manner and supports collapsing, searching by name, XPath queries, and more. Inspecting your XMPP traffic has never been easier!

Message pipelining

The new XML parser also supports message pipelining, which means that multiple stanzas can be safely sent and received at once by the client. This can greatly improve the efficiency of many use cases and in the future will be used to speed up Escalus’ XMPP connection times by an order of magnitude.

Transport-level metadata for stanzas

The transport process used to receive and deliver XMPP stanzas can now pass on metadata that will be included with elements delivered to the Escalus client. Currently the metadata set by the existing transport implementations is limited to a receive timestamp, but it can be easily overridden and extended by a custom solution.

Other improvements

TCP connections are now set up with the nodelay option, which decreases communication latency between Escalus and the XMPP server. If you don’t like the change, Escalus 4.0.0 also supports easy overriding of TCP options, allowing you to disable it and more, including the nitty-gritty settings of buffer sizes or the ToS flag.

A host of refactoring changes, subtle API improvements, bug fixes, performance enhancements, and an increased number of useful functions to inspect and construct your stanzas all lead to a smoother experience while testing all of your most important features.

Test our work on Escalus 4.0.0 and share your feedback

Check out our full release notes for Escalus 4.0.0 & 3.1.0 on GitHub. Play around and let us know what you think!

Help us improve the MongooseIM platform:

  1. Star our repo: esl/escalus
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter: twitter.com/MongooseIM
  4. Download Docker image with new release
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.

We thought you might also be interested in:

Our RabbitMQ solutions

RabbitMQ monitoring with wombatOAM

MongooseIM - XMPP Protocol

October 26, 2019 17:40