Planet Jabber

October 11, 2019


FrenchKit Conference: Day 2 Highlights

Yesterday, I shared my highlights on FrenchKit Conference 2019, Day 1. Today, I will talk about FrenchKit Day 2.

Swift Superpowers

Swift Superpowers was a series of three lightning talks presented by David Bonnet, mostly focused on server-side Swift and spread throughout the day. He covered the following topics:

  • Vapor 3 code examples
  • Networking example with SwiftNIO and the new cross-platform URLSession client (you need to import FoundationNetworking on Linux)
  • Debugging Vapor apps in Xcode and cross-platform testing using XCTest and Docker

Like the day before, those lightning talks were great and refreshing.

Swift Generics: It isn’t supposed to hurt

Rob Napier explained how to incrementally refactor your code using generics to make it more flexible and avoid repeating the same patterns.

The talk was interesting and condensed, and was followed by an even more content-packed masterclass at the end of the day. The topic is fascinating, but the masterclass format does not do it justice. It is very hard to keep your focus for 90 minutes on hardcore generics code refactors.

The takeaway, as always, is to start with concrete code first and then work out the generics.

Note encryption: 10 lines for encryption, 1500 lines for key management

Next was a mind-blowing talk on cryptography by Anastasiia Voitova. She knows her topic, and the story of how her company Cossack Labs helped implement end-to-end encryption in the Bear note-taking app was very enlightening.

SwiftPM’s New Resolver: Can it Resolve the Conflicts in my Relationship?

I quite liked Mert Buran's talk. He managed to make a dry topic, Swift package dependency resolution, interesting.

Mert's talk was also the funniest of the conference, with examples of conflict resolution in his rock band.

Finally, my takeaway is that Swift Package Manager is a nice piece of Open Source code that you can read and learn from.

This is not rocket (data) science

It was another great talk. Hervé Beranger covered concrete use cases of AI that you can add to your iOS applications today:

  • Voice Interfaces
  • Translations
  • Semantic search
  • Sentiment analysis
  • Suggested related searches
  • Smart replies
  • Home-made text classifiers

He gave a lot of examples, along with the Apple APIs you can use to implement them.

An introduction to property-based testing

I was happy to see a talk on property-based testing. At ProcessOne, we have worked with Quviq to use Erlang QuickCheck on our code base, and I have long been passionate about property-based testing.

Vincent Pradeilles did a good job presenting property-based testing and explaining how you can use it as an addition to more traditional testing methods.

Swift has quite a nice implementation of Quickcheck, called SwiftCheck. You should give it a try.

Vincent also mentioned lightweight alternatives for testing with random data, such as using a faker library to generate random inputs in a fuzz-testing approach.
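To give a flavour of the idea without pulling in SwiftCheck itself, here is a minimal hand-rolled sketch of a property-based check in Swift (the forAll helper below is my own illustration, not the SwiftCheck API):

```swift
// A toy forAll: generate many random inputs and check that the property
// holds for each of them. Real libraries like SwiftCheck also shrink
// counterexamples; this sketch only reports the first failure.
func forAll(iterations: Int = 100, _ property: ([Int]) -> Bool) -> Bool {
    for _ in 0..<iterations {
        let size = Int.random(in: 0...20)
        let input = (0..<size).map { _ in Int.random(in: -1000...1000) }
        if !property(input) {
            print("Property falsified for input: \(input)")
            return false
        }
    }
    return true
}

// Property: reversing an array twice yields the original array.
let holds = forAll { xs in
    Array(xs.reversed().reversed()) == xs
}
print(holds ? "Property holds" : "Property falsified")
```

The point is not the tiny helper itself, but the mindset: you state an invariant once and let generated data try to break it.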

Shipping a Catalyst app: The Good, the Bad and the Ugly

This was a nice talk by Peter Steinberger sharing his feedback on Catalyst, a framework for porting UIKit-based iPadOS apps to macOS. He shared the tricks they had to use to make their PDF viewer app feel more native on macOS using Catalyst.

For example, Peter was forced to bridge to AppKit for some features like:

  • Toolbar with toolbar editor
  • NSSearch
  • NSCursor changes (MacSupport bundle)
  • Open Recent menu support (particularly painful to implement as Catalyst Apps are sandboxed, like all Mac AppStore apps)

And that’s a wrap

The final talk was from Olivier Halligon and was one of my favorites. He managed to explain why and how to use property wrappers in Swift. Property wrappers were added in Swift 5.1 and are my favorite new feature. If you use them with a clear purpose, they can really help improve your code.

You can check his slides: And that’s a Wrap!

Make sure to check the last tip on how to use property wrappers to avoid implementing Codable conformance manually for your structs just to handle the date format properly. This is a tip I expect to reuse often.
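As an illustration of that kind of trick (the wrapper below is my own sketch, not Olivier's code), a property wrapper can own the date parsing so the struct keeps its synthesized Codable conformance:

```swift
import Foundation

// A property wrapper that carries its own date decoding/encoding strategy.
// The struct using it never has to hand-write init(from:) just for dates.
@propertyWrapper
struct ISO8601Date: Codable {
    var wrappedValue: Date

    init(wrappedValue: Date) { self.wrappedValue = wrappedValue }

    init(from decoder: Decoder) throws {
        let string = try decoder.singleValueContainer().decode(String.self)
        guard let date = ISO8601DateFormatter().date(from: string) else {
            throw DecodingError.dataCorrupted(.init(codingPath: decoder.codingPath,
                debugDescription: "Invalid ISO 8601 date: \(string)"))
        }
        wrappedValue = date
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()
        try container.encode(ISO8601DateFormatter().string(from: wrappedValue))
    }
}

struct Event: Codable {
    var name: String
    @ISO8601Date var date: Date  // decoded through the wrapper
}

let json = #"{"name": "FrenchKit", "date": "2019-10-08T09:00:00Z"}"#.data(using: .utf8)!
let event = try JSONDecoder().decode(Event.self, from: json)
print(event.name)
```

The synthesized Codable machinery delegates the `date` key to the wrapper's own Codable conformance, which is what makes the trick work.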


Overall, the FrenchKit Conference was really great. It was a well organised event packed with great talks.

If you could not attend, you can always catch up with the talks on video when they get released.

by Mickaël Rémond at October 11, 2019 10:46

FrenchKit Conference: Day 1 Highlights

FrenchKit is an iOS and macOS developer conference held in Paris. The fourth edition took place on October 7-8, 2019. I was attending this conference for the first time and really enjoyed the gathering. The conference is well organised, with a lot of excellent speakers. There is a genuinely good vibe coming from the FrenchKit community, at least from a French perspective, and I feel this impression was shared by the international visitors I spoke with.

Here are my highlights for the first day.

Swift Pills

Swift Pills were three short lightning talks presented at various times during the first day by Vincent Pradeilles.

The talks were nice, sharing small five-minute tips on various topics:

  • Encapsulating [weak self]
  • Let’s talk about @autoclosure

I liked the talks, but more than that, I feel that spreading lightning talks throughout the day is a very nice idea. It breaks the rhythm of a long sequence of talks and provides a refreshing break. It felt better than gathering all the lightning talks at once, and I think other conferences should adopt the idea.
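To illustrate the kind of tip covered, here is a small @autoclosure example (my own, not Vincent's): the argument expression is wrapped in a closure automatically, so it is only evaluated if actually needed.

```swift
// log only builds the message when logging is enabled: thanks to
// @autoclosure, the call site passes a plain expression, but it is
// wrapped in a closure and evaluated lazily.
func log(_ message: @autoclosure () -> String, enabled: Bool) {
    guard enabled else { return }  // message() is never evaluated here
    print(message())
}

func expensiveDescription() -> String {
    // Imagine heavy work here.
    return "details"
}

log(expensiveDescription(), enabled: false)  // expensiveDescription() is not called
log("visible", enabled: true)
```

This is how assert and fatalError avoid evaluating their message arguments in release builds.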

Animations with SwiftUI

Chris Eidhof is famous for his live coding video tutorials. He demoed how to build a custom shake animation with SwiftUI, using a GeometryEffect view modifier. You can read more about the topic he presented on his blog: SwiftUI: Shake Animation

Understanding Combine

The second talk was presented by Daniel Steinberg, another prominent speaker in the Swift development community. He demonstrated how the reactive pattern differs from the delegation pattern, and did a great job explaining the use of Combine with UIKit (Yes, you can use @Published on UIKit View Controllers).

SwiftUI with Redux

Thomas Ricouard managed to pack a nice Redux introduction into a short time, with examples coming from his app MovieSwiftUI. You can check out his open-source app on GitHub: MovieSwiftUI.

As he says, Apple hinted at unidirectional data flow (à la Redux) at WWDC, but did not precisely describe how to leverage it. Thomas's talk helped fill the gap.

Showcase driven development

This talk by Jérôme Alves introduced the approach used at Heetch to help shorten each development iteration. The principle is simple: If you want to avoid large pull requests with complex merges, you need to focus on short-lived development branches changing only a few things. You thus split big features into sub-features and even split them by layers (model, view, etc) to have smaller branches to merge.

However, how do you demonstrate work in progress? The solution presented by Jérôme is to introduce a menu in the Debug build of the application dedicated to demonstrating your unfinished work. The showcase menu in the test app offers a showcase browser that can be used to show prototype views, workflows, animations, etc. It helps when discussing the next steps and requirements with management, marketing or customers.

Finally, even if Jérôme recommends rolling your own code to set up that feature, Heetch released ShowcaseKit to give you an idea of what they did and how they are using the showcase approach.

Slide to unlock: Building Custom UI with UIKit and CoreAnimation

I also enjoyed this talk from Joel Kin. His talk shows that if you master the various layers of Apple's UI frameworks (from UIKit down to Core Animation), it often makes more sense to develop a custom component based on Apple's standard tooling than to introduce and adapt a dependency from GitHub.

He does a great job showing how he built a custom slide-to-unlock component, using fewer lines of code than alternative open-source components.

The example is quite convincing, starting from a UISlider and ending up with a great-looking “slide-to-unlock” component, complete even with the shimmer effect.

You can check out the code on GitHub: SlideToUnlock


To end the day, you had a choice of seven possible workshops. I attended “Exploring Combine”, hosted by Florent Pillet and Antoine Van Der Lee.

While I had already explored many of the concepts, it was a nice introduction to Apple's reactive programming framework.

The workshop is a nice way to meet people and share ideas through pair programming while solving the proposed exercises.

Conclusion for Day 1

Day 1 at FrenchKit was a blast. The day was packed with great talks. The atmosphere was very friendly and the venue was great.

Stay tuned for my highlights of Day 2!

by Mickaël Rémond at October 11, 2019 10:34

October 09, 2019


SwiftNIO: Understanding Futures and Promises

SwiftNIO is Apple's non-blocking networking library. It can be used to write either client libraries or server frameworks, and it works on macOS, iOS and Linux.

It is built by some of the Netty team members. It is a port of Netty, a high-performance networking framework written in Java, adapted to Swift. SwiftNIO thus reuses years of experience designing a proven framework.

If you want to understand in depth how SwiftNIO works, you first have to understand its underlying concepts. In this article, I will start by explaining the concept of futures and promises. The ‘future’ concept is available in many languages, including JavaScript and C#, under the name async/await, and in Java and Scala, under the name ‘future’.

Futures and promises

Futures and promises are a set of programming abstractions to write asynchronous code. The principle is quite simple: Your asynchronous code will return a promise instead of the final result. The code calling your asynchronous function is not blocked and can do other operations before it finally decides to block and wait for the result, if / when it really needs to.

Even if the words ‘futures’ and ‘promises’ are often used interchangeably, there is a slight difference in meaning. They represent different points of view on the same value placeholder. As explained on the Wikipedia page:

A future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future.

In other words, the future is what the client code receives and can use as a handler to access a future value when it has been defined. The promise is the handler the asynchronous code will keep to write the value when it is ready and thus fulfill the promise by returning the future value.
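To make the two roles concrete, here is a deliberately minimal, hand-rolled future/promise pair (my own sketch for illustration, not SwiftNIO's implementation; EventLoopFuture is far more capable):

```swift
import Dispatch

// Writer side: the asynchronous code keeps the promise and fulfills it once.
final class Promise<Value> {
    private let semaphore = DispatchSemaphore(value: 0)
    private var value: Value?

    var future: Future<Value> { Future(promise: self) }

    func succeed(_ value: Value) {
        self.value = value
        semaphore.signal()   // wake up anyone waiting on the future
    }

    fileprivate func wait() -> Value {
        semaphore.wait()     // block until the promise is fulfilled
        return value!
    }
}

// Reader side: a read-only view on the promised value.
struct Future<Value> {
    private let promise: Promise<Value>
    fileprivate init(promise: Promise<Value>) { self.promise = promise }
    func wait() -> Value { promise.wait() }
}

// Usage: an async producer fulfills the promise; the caller holds the future.
let promise = Promise<Int>()
DispatchQueue.global().async {
    promise.succeed(21 * 2)  // the asynchronous work
}
let future = promise.future
print(future.wait())         // blocks until the value is ready
```

The split is the whole point: the producer only ever sees the writable promise, the consumer only ever sees the read-only future.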

Let’s see in practice how futures and promises work.

SwiftNIO comes with a built-in futures and promises library. The code lies in EventLoopFuture. Don’t be fooled by the name: It is a full-featured ‘future’ library that you can use in your code to handle asynchronous operations.

Let’s see how you can use it to write asynchronous code, without specific reference to SwiftNIO-oriented networking operations.

Note: The examples in this blog post should work both on macOS and Linux.

Anatomy of SwiftNIO future / promise implementation

Step 1: Create an EventLoopGroup

The basic skeleton for our example is as follows:

import NIO

let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

// Do things

try evGroup.syncShutdownGracefully()

We create an EventLoopGroup and shut it down gracefully at the end. A graceful shutdown means it will properly terminate the asynchronous jobs being executed.

An EventLoopGroup can be seen as a provider of execution contexts for your asynchronous code. You can ask the EventLoopGroup for an execution context: an EventLoop. Basically, each execution context, each EventLoop, is a thread. EventLoops are used to provide an environment to run your concurrent code.

In the previous example, we create as many threads as we have cores on our computer (System.coreCount), but the number of threads could be as low as 1.

Step 2: Getting an EventLoop to execute your promise

In SwiftNIO, you cannot model concurrent execution without at least an event loop. For more on what I mean by concurrency, you can watch Rob Pike's excellent talk: Concurrency is not parallelism.

To execute your asynchronous code, you need to ask the EventLoopGroup for an EventLoop. You can use the method next() to get a new EventLoop, in a round-robin fashion.

The following code gets 10 event loops, using the next() method and prints the event loops information.

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

for _ in 1...10 {
    let ev = evGroup.next()
    print(ev)
}

try evGroup.syncShutdownGracefully()

On my system, with 8 cores, I get the following result:

System cores: 8

SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 5 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 6 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 7 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 8 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 9 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 10 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 3 }, scheduledTasks = PriorityQueue(count: 0): [] }
SelectableEventLoop { selector = Selector { descriptor = 4 }, scheduledTasks = PriorityQueue(count: 0): [] }

The description represents the id of the EventLoop. As you can see, you get 8 different loops before being assigned an existing EventLoop from the same group again. As expected, this matches our number of cores.

Note: Under the hood, most EventLoops are designed using NIOThread, so that the implementation can be cross-platform: NIO threads are built using POSIX threads. However, some platform-specific loops, like NIO Transport Services, are free from multiplatform constraints and use Apple's Dispatch library. This means that if you are targeting only macOS, you can use SwiftNIO futures and promises directly with the Dispatch library. As libdispatch now ships with Swift on Linux, it could also work there, but I have not tested it yet.

Step 3: Executing async code

If you just want to execute async code without needing to wait back for a result, you can just pass a function closure to the EventLoop.execute(_:):

import NIO

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

ev.execute {
    print("Hello, ")
}

// sleep(1)
print("world!")

try evGroup.syncShutdownGracefully()

In the previous code, the order in which “Hello, ” and “world!” are displayed is undetermined.

Still, on my computer, it is clear that they are not executed in order. The print-out in the execute block is run asynchronously, after the execution of the print-out in the main thread:

System cores: 8

world!
Hello, 
You can uncomment the sleep(1) function call to insert one second of delay before the second print-out instruction. It will “force” the ordering by delaying the main thread print-out, so that “Hello, world!” is displayed in sequence.

Step 4: Waiting for async code execution

Adding timers to your code to order code execution is a very bad practice. If you want to wait for the async code to finish, that is where ‘futures’ and ‘promises’ come into play.

The following code will submit an async code to run on an EventLoop. The asyncPrint function will wait for a given delay in the EventLoop and then print the passed string.

When you call asyncPrint, you get a future in return. You can call the wait() method on that future to wait for the completion of the async code.

import NIO

// Async code
func asyncPrint(on ev: EventLoop, delayInSecond: UInt32, string: String) -> EventLoopFuture<Void> {
    // Do the async work
    let promise = ev.submit {
        sleepAndPrint(delayInSecond: delayInSecond, string: string)
    }

    // Return the promise
    return promise
}

func sleepAndPrint(delayInSecond: UInt32, string: String) {
    sleep(delayInSecond)
    print(string)
}

// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let future = asyncPrint(on: ev, delayInSecond: 1, string: "Hello, ")

print("Waiting...")
try future.wait()
print("world!")

try evGroup.syncShutdownGracefully()

The print-out will pause for one second on the “Waiting…” message and then display the “Hello, ” and “world!” messages in order.

Step 5: Promises and futures result

When you need a result, you need to return a promise that gives you more than just a signal letting you know the processing is done. Thus, it will not be a promise of a Void result, but a promise of a more complex value.

First, let’s look at a promise of a simple result that cannot fail. In your async code, you can return a promise of the result of a factorial calculation performed asynchronously. Your code promises to return a Double and then submits the job to the EventLoop.

import NIO

// Async code
func asyncFactorial(on ev: EventLoop, n: Double) -> EventLoopFuture<Double> {
    // Do the async work
    let promise = ev.submit { () -> Double in
        return factorial(n: n)
    }

    // Return the promise
    return promise
}

// I would use a BigInt library to go beyond small-number factorial calculations,
// but I do not want to introduce an external dependency.
func factorial(n: Double) -> Double {
    if n >= 0 {
        return n == 0 ? 1 : n * factorial(n: n - 1)
    } else {
        return 0 / 0  // Double division: NaN for negative input
    }
}
// ===========================
// Main program

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let n: Double = 10
let future = asyncFactorial(on: ev, n: n)


let result = try future.wait()

print("fact(\(n)) = \(result)")

try evGroup.syncShutdownGracefully()

The code will be executed asynchronously and the wait() method will return the result:

System cores: 8

fact(10.0) = 3628800.0

Step 6: Success and error processing

If you are doing network operations, like downloading a web page for example, the operation can fail. You thus need to handle a more complex result, one that can be either a success or an error. SwiftNIO offers a ready-made type for this, called ResultType.

In the next example, we show an async function that performs an asynchronous network operation using callbacks and returns a future result. The result wraps either the content of the downloaded page or the error passed to the failure callback.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Task done")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    // Fulfill the promise with the page content
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let ev = evGroup.next()

let future = asyncDownload(on: ev, urlString: "")
future.whenSuccess { page in
    print("Page received")
}
future.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

The previous code will either print “Page received” when the page is downloaded, or print the error. As your success handler receives the page content itself, you could do something with it (print it, analyse it, etc.).

Step 7: Combining async work results

Where promises really shine is when you want to chain several async calls that depend on each other. You can thus write code that appears logically sequential, but that is actually asynchronous.

In the following code, we reuse the previous async download function and process several pages by counting the number of div elements in all pages.

By wrapping this processing in a reduce function, we can download all the web pages in parallel. We receive the page data as pages are downloaded and keep a running count of the number of divs per page. Finally, we return the total as the future result.

This is a more involved example that should give you a better taste of what developing with futures and promises looks like.

import NIO
import Foundation

// =============================================================================
// MARK: Helpers

struct CustomError: LocalizedError, CustomStringConvertible {
    var title: String
    var code: Int
    var description: String { errorDescription() }

    init(title: String?, code: Int) {
        self.title = title ?? "Error"
        self.code = code
    }

    func errorDescription() -> String {
        "\(title) (\(code))"
    }
}

// MARK: Async code
func asyncDownload(on ev: EventLoop, urlString: String) -> EventLoopFuture<String> {
    // Prepare the promise
    let promise = ev.makePromise(of: String.self)

    // Do the async work
    let url = URL(string: urlString)!

    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        print("Loading \(url)")
        if let error = error {
            promise.fail(error)
            return
        }
        if let httpResponse = response as? HTTPURLResponse {
            if (200...299).contains(httpResponse.statusCode) {
                if let mimeType = httpResponse.mimeType, mimeType == "text/html",
                    let data = data,
                    let string = String(data: data, encoding: .utf8) {
                    // Fulfill the promise with the page content
                    promise.succeed(string)
                    return
                }
            } else {
                // TODO: Analyse response for better error handling
                let httpError = CustomError(title: "HTTP error", code: httpResponse.statusCode)
                promise.fail(httpError)
                return
            }
        }
        let err = CustomError(title: "no or invalid data returned", code: 0)
        promise.fail(err)
    }
    task.resume()

    // Return the promise of a future result
    return promise.futureResult
}

// =============================================================================
// MARK: Main

print("System cores: \(System.coreCount)\n")
let evGroup = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

var futures: [EventLoopFuture<String>] = []

for url in ["", "", ""] {
    let ev = evGroup.next()
    let future = asyncDownload(on: ev, urlString: url)
    futures.append(future)
}

let futureResult = EventLoopFuture.reduce(0, futures, on: evGroup.next()) { (count: Int, page: String) -> Int in
    let tok = page.components(separatedBy: "<div")
    let divCount = tok.count - 1
    return count + divCount
}

futureResult.whenSuccess { count in
    print("Result = \(count)")
}
futureResult.whenFailure { error in
    print("Error: \(error)")
}

// Timeout: As processing is async, we can handle timeout by just waiting in
// main thread before quitting.
// => Waiting 10 seconds for completion
sleep(10)

try evGroup.syncShutdownGracefully()

This code actually builds a pipeline as follows:


Futures and promises are at the heart of SwiftNIO design. To better understand SwiftNIO architecture, you need to understand the futures and promises mechanism.

However, there are more concepts that you need to master to fully understand SwiftNIO. Most notably, inbound and outbound channel handlers allow you to structure your networking code into reusable components executed in a pipeline.

I will cover more SwiftNIO concepts in a future blog post. In the meantime, please send us your feedback :)

by Mickaël Rémond at October 09, 2019 15:36

Erlang Solutions

MongooseIM: Designed with privacy in mind

Let’s face it. We are living in an age where all technology players gather and process huge piles of user data, ranging from our behavioural patterns to our location data. Hence, we receive personalized emails from online retail stores we have visited for just a second, or personalized ads for stores in our vicinity displayed in our social media streams.

Consequently, more and more people are becoming aware of how their privacy could be at risk with all of this data collection. In turn, the European Union has striven to protect consumer rights by implementing privacy guidelines in the form of the General Data Protection Regulation (GDPR), which governs how consumer data can be handled by third parties. In fact, over 200,000 privacy violation cases were filed over the course of the last year, followed by over €56m in fines for data breaches. The stakes are therefore high for all messaging service providers out there.

You might wonder: “Why should this matter to me? After all, my company is not in Europe.” Well, if any of the users of your messaging service are located in the EU, you are affected by GDPR as if you hosted your service right there. Feeling uneasy? Don’t worry, the MongooseIM team has got you covered. Please welcome the new MongooseIM, which brings full GDPR compliance.

Privacy by Design

A new concept was defined with the dawn of GDPR: privacy by design. It assumes that the software solution being used follows the principles of minimising and limiting, hiding and protecting, separating and aggregating, as well as providing privacy by default.

Minimise and limit

The minimise and limit principle concerns the amount of personal data gathered by a service. The general principle is to take only the bare minimum required for the service to run, instead of saving unnecessary data just in case. If more data is collected, the unnecessary part should be deleted. Luckily, MongooseIM uses only the bare minimum of personal data provided by the users and relies on the users themselves to provide more if they wish to - e.g. by filling out the roster information. Moreover, since it implements XMPP and is open source, everybody has insight into how the data is processed.

Hide and protect

The hide and protect principle refers to the fact that user data should not be made public and should be hidden from plain view, preventing third parties from identifying users through personal data or its interrelation. We have tackled that by handling the creation of JIDs and by making recommendations regarding log collection and archiving.

What is this all about? JIDs are the central and focal point of MongooseIM operation, as they are the unique identifiers of users in the system. As long as the JID does not contain any personally identifiable information, like a name or a telephone number, it is merely a pseudonym and cannot easily be linked to the individual it represents. This is why one should refrain from putting personally identifiable information in JIDs. For that reason, our release includes a mechanism that allows automatic user creation with random JIDs, which you can invoke by typing ‘register’ in the console. Specific JIDs are created by intentionally invoking a different command (register_identified).
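For example, assuming a standard mongooseimctl setup (the exact argument order may differ between MongooseIM versions, so treat this as a sketch rather than a reference):

```shell
# Create an account with a randomly generated JID (the privacy-friendly default).
mongooseimctl register localhost a_password

# Intentionally create an account with a specific, chosen JID.
mongooseimctl register_identified alice localhost a_password
```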

Still, it is possible that MongooseIM logs contain personally identifiable information such as IP addresses that could correlate to JIDs. Even though the JID is anonymous, an IP address next to a JID might lead to the person behind it through correlation. That is why we recommend that installations with privacy in mind have their log level set to at least ‘warning’ level in order to avoid breaches of privacy while still maintaining log usability.

Separate and aggregate

The separate principle boils down to partitioning user data into chunks rather than keeping it in a monolithic DB. Each chunk should contain only the private data necessary for its own functioning. Such separation makes it harder to identify a person through correlation, as the data is scattered and isolated - hence the popularity of microservices. Since MongooseIM is an XMPP server written in Erlang, it is naturally partitioned into modules that have their own storage backends. In this way, private data is separated by default in MongooseIM and can also be handled individually - e.g. by deleting all the private data relating to one function.

The aggregation principle refers to the fact that all data should be processed in an aggregated manner and not in one focused on detailed personal cases. For instance, behavioural patterns should be representative of a concrete, not identifiable cohort rather than of a certain Rick Sanchez or Morty Smith. All the usage data being processed by MongooseIM is devoid of any personally identifiable traits and instead tracks metrics relevant to the health of the server. The same can be said for WombatOAM if you pair it with MongooseIM. Therefore, aggregation is supported by default.

Privacy by default

It is assumed that the user should be offered the highest degree of privacy by default. This is highly dependent on your own implementation of the service running on top of MongooseIM. However, if you follow the recommendations laid out in this post, you can be sure you implement it well on the backend side, as we do not differentiate between the levels of privacy being offered.

The Right of Access

According to GDPR, each user has the right of access to their own data kept by a service provider. That data includes not only the personal data provided by the user but also all the derivative data generated by MongooseIM on its basis. That includes data held in mod_vcard, mod_roster, mod_mam, mod_offline, mod_pubsub, mod_private, mod_inbox, and logs. If we add a range of PubSub backends and MAM backends to the fray, one can see it gets complicated.

With MongooseIM, we have put a lot of effort into making retrieval as painless as possible for the system administrators who oversee day-to-day operations. That is why we have developed a mechanism, started by executing the retrieve_personal_data command, that collects all the personal and derivative data belonging to the user behind a specific JID. The command executes for all the modules, whether they are enabled or disabled. All the relevant data is then extracted per module and returned to the user in the form of an archive.

In order to facilitate data collection, we have changed the schemas of all our MAM backends. This was done to allow swift data extraction, since until now running such a query was very inefficient and resource-hungry. Of course, we have prepared migration strategies for the affected backends.

The Right to be Forgotten

The right to be forgotten goes hand in hand with the right of access. Each user has the right to remove their footprint from the service. Retrieval from all the modules listed above is already problematic; removal is even harder.

We have implemented a mechanism that removes the user account, leaving behind only the JID. You can run it by executing the “unregister” command. All of the private data not shared with other users is deleted from the system. In contrast, private data that is shared with other users - e.g. group chat messages or PubSub flat nodes - is left intact, as the content is not owned by one party alone.

Logs are not a part of this action. If the log levels are set at least to ‘warning’, there is no personal data that can be tied to the JIDs in the first place so there is no need for removal.

Final Words on GDPR

The elements above make MongooseIM fully compliant with the current GDPR. We have continued our commitment to making MongooseIM the most GDPR-compliant instant messaging platform in our recent release, MongooseIM 3.5. You can read about the latest changes here. However, you have to remember that this is only a piece of the puzzle. Since MongooseIM is a backend to a service, other considerations have to be fulfilled for the entire service to be GDPR compliant. Some of these are process-oriented requirements of informing, enforcing, controlling, and demonstrating that have to be taken into account during service design.


Please feel free to read the detailed changelog. Here, you can find a full list of source code changes and useful links.

Test our work on MongooseIM and share your feedback

Help us improve MongooseIM:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter
  4. Download the Docker image with the new release.
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.

October 09, 2019 10:48

October 08, 2019

Ignite Realtime Blog

inVerse Openfire plugin released

@wroot wrote:

The Ignite Realtime community is happy to announce the release of a new version of the inVerse plugin for Openfire!

This update brings changes and fixes from Converse versions 5.0.2 to 5.0.4, among them a few security patches, a new option for message corrections, and other bug fixes.

Your instance of Openfire should automatically display the availability of the update. Alternatively, you can download the new release of the plugin from the inVerse plugin’s archive page.

For other release announcements and news follow us on Twitter


by @wroot wroot at October 08, 2019 20:07

October 06, 2019


The concepts behind Swift UI: Introduction

SwiftUI is a new application framework from Apple that complements UIKit and AppKit. It is expected to be the future of UI definition on Apple platforms, unifying GUI application development under a single framework everywhere, from Apple Watch to iOS and macOS.

A change in architectural patterns: From MVC to MVVM

SwiftUI marks a shift in application architectural pattern. UIKit and AppKit applications are structured using the Model-View-Controller (MVC) paradigm. The Controller is the central abstraction layer used to link the Model, containing the business logic, to the View. This extra layer exists to make the code more maintainable and reusable.

The MVC pattern has been dominant in User Interface frameworks since it was first introduced in Smalltalk in the 80s.

However, it tends to lead to excessive boilerplate code. Moreover, as a programmer you have to fight the tendency to put a large part of the application logic into the controller: the controller can become bloated and complex over time in large applications.

With SwiftUI, Apple is moving application development to the Model-View-ViewModel (MVVM) architectural pattern. This pattern was invented at Microsoft as a variation of Martin Fowler’s Presentation Model.
It simplifies the design of event-based applications by offering a clearer dependency model between components and a clearer data flow. In MVVM, the View has a reference to the ViewModel and the ViewModel has a reference to the Model, and that’s it.

The MVVM pattern is used in many popular JavaScript frameworks these days (ReactJS, Vue.js, Ember.js, etc.) and is becoming more and more widely used to design user interfaces.

The reference flow is simple but at some point, you will also want to let the Model update the View to reflect state changes.

You can still use the proven delegation approach to let the model update the view, but in that case you are adding circular references and missing a big part of what makes SwiftUI great.

The concept of Binding is what finally allows the Model to update the View. In SwiftUI, you can pass bindings to link view states to the underlying model. As defined in Apple’s documentation, a Binding is a “manager for a value that provides a way to mutate it”. It defines a two-way connection, propagating user changes on the application state from the view, but also updating the view when the model changes.
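
To make the idea concrete, here is a minimal, illustrative sketch (the view names are invented for this example): the child view mutates the parent’s state through a Binding, and the parent re-renders automatically.

import SwiftUI

// SettingsRow mutates `isOn` in its parent through the binding.
struct SettingsRow: View {
    @Binding var isOn: Bool

    var body: some View {
        Toggle(isOn: $isOn) { Text("Notifications") }
    }
}

struct SettingsView: View {
    @State private var notificationsEnabled = true

    var body: some View {
        // The $ prefix passes a Binding to the state, not its value.
        SettingsRow(isOn: $notificationsEnabled)
    }
}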

If you want to get further, you can design part of your application logic using Combine. Combine is a reactive framework to help build more complex applications using SwiftUI, taking care of event propagation and asynchronous programming, using a publish and subscribe model. Combine can be used for example to add a networking layer to your application.
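
As a rough sketch of such a networking layer (the Item type and the function are placeholders invented for this example, not part of any real API):

import Combine
import Foundation

struct Item: Decodable {
    let id: Int
}

// Build a publisher that fetches JSON from a URL and decodes it into [Item].
func items(from url: URL) -> AnyPublisher<[Item], Error> {
    URLSession.shared.dataTaskPublisher(for: url)
        .map { $0.data }
        .decode(type: [Item].self, decoder: JSONDecoder())
        .eraseToAnyPublisher()
}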

A change in View design approach

SwiftUI also changes radically how Views are designed and maintained. In the past, views on iOS have often been designed graphically, using Interface Builder or the Storyboard editor. Defining views in code was possible but required much more effort. The interface was thus often designed graphically, with constraints defining how it should adapt to different screen sizes. This resulted in interfaces being described in XML files, making teamwork difficult, as merge conflicts were painful to resolve.

With SwiftUI, Apple introduces a declarative approach to UI design. The UI is always defined as code, but Apple also provides a canvas that shows a live preview of your view and guides you while developing the application. You can also interact directly with the canvas, and the code is updated accordingly.

At this time, the canvas is sometimes a bit unstable, but when it works, the overall process feels quite magical. No more fiddling with lots of panels defining values and constraints. SwiftUI simplifies the process by providing sane defaults for most system-provided views, and it removes the need for constraints by providing containers that define the logical relationships between subviews. For example, a VStack defines a group of views rendered as a vertical stack, one after the other; an HStack defines views that are laid out horizontally; a ZStack defines a set of views rendered on top of one another (overlaid); and so on. By combining high-level containers, you describe the relationships between the views in your application and let the system work out by itself how to render them.
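
For instance, a small profile header could be sketched by nesting containers instead of writing constraints (the names and layout here are purely illustrative):

import SwiftUI

struct ProfileHeader: View {
    var body: some View {
        HStack {                          // avatar and labels, side by side
            ZStack {                      // initials drawn on top of a circle
                Circle().fill(Color.gray)
                Text("MR")
            }
            .frame(width: 44, height: 44)
            VStack(alignment: .leading) { // labels stacked vertically
                Text("Mickaël")
                Text("Online").font(.caption)
            }
        }
    }
}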

Finally, I said previously that SwiftUI complements UIKit and AppKit. This is because SwiftUI works well with both: Apple has defined tools and patterns so that the frameworks can work together. It means you can integrate high-level components that are not directly available in SwiftUI, like MapKit or WebKit. It also means you can convert your application incrementally, adding SwiftUI views to an existing UIKit application, for example.

Get prepared for Apple platform future

For now, SwiftUI is still at version 1. It already feels like the future of Apple application design, but you have to be prepared to face small bugs from time to time, in Xcode or in on-device rendering.

The fact that SwiftUI can only target iOS 13 / macOS 10.15 / watchOS 6 / tvOS 13 devices will also limit adoption, as developers often have to support applications running on several different OS versions.

You can, however, expect SwiftUI to improve and mature considerably in the coming months.

With SwiftUI, Apple is attempting to solve several issues:

  • They want to unify the UI framework on all their platforms.
  • They are simplifying the programming model, offering a path to migrate from MVC to MVVM pattern.
  • As a side effect, they are reworking their concurrency pattern, introducing Combine as the underlying framework for implementing reactive programming in Swift-based applications.

If you are developing a brand new application today that is going to be released in a few months from now, it makes sense to consider SwiftUI.

At the very least, you should start learning SwiftUI and Combine now to be ready to adopt it when you feel it is possible for your kind of app.

by Mickaël Rémond at October 06, 2019 10:33

October 03, 2019


The concepts behind SwiftUI: What is the keyword “some” doing?

When you want to start learning SwiftUI, you create a project and get a basic example view with just a Text field. That default view is pretty simple, but already introduces many new concepts. Let’s focus in this article on the some keyword.

Note: If you are new to SwiftUI, you may want to read The concepts behind Swift UI: Introduction first.

The default SwiftUI content view

Here is an empty default view:

import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello World")
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

The ContentView is an implementation of the View protocol. The View protocol requires a body computed property of type some View. So, you may be wondering what that some keyword is doing.

Opaque Types

The some keyword was introduced in Swift 5.1 and is used to define an Opaque Type. An opaque type is a way to return a type without providing details on the concrete type itself. It limits what callers need to know about the returned type, only exposing information about its protocol conformance. Using an opaque type lets the compiler decide the concrete type of a function’s return value, based on the actual value returned, while restricting the options to types that conform to a given protocol.

So, in SwiftUI’s case, some View means that the body will always implement the View protocol, but the concrete implementation type does not need to be known by the caller.

As explained in Swift documentation on Opaque Types:

You can think of an opaque type like being the reverse of a generic type.

An opaque type lets the function implementation pick the type for the value it returns in a way that’s abstracted away from the code that calls the function.

In other words, standard generic placeholders are filled in by the caller: when you call a generic function, you constrain the generic types to the types you pass to that function. You can think of an opaque type as a kind of reversed generic, where the placeholder type is filled in by the implementation’s return value.
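
A small sketch can make the contrast concrete (the function names are invented for this example):

// With a generic, the caller picks the concrete type:
func logAll<C: Collection>(_ items: C) {
    items.forEach { print($0) }
}

// With an opaque type, the implementation picks the concrete type;
// callers only know the result conforms to Collection:
func makeBuffer() -> some Collection {
    return [1, 2, 3] // the hidden concrete type is [Int]
}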


In the following code using an opaque type, the body will always be of type Text:

struct ContentView: View {
    var x: Bool = false

    var body: some View {
        if x {
            return Text("This is true")
        } else {
            return Text("This is false")
        }
    }
}
At compile time, the function is known to return a Text view. The code is valid because Text view is implementing the View protocol.

However, the following code does not contain a valid opaque type, as the two code branches return different concrete types: the first branch returns a Text view, the second a VStack.

struct ContentView: View {
    var x: Bool = false

    var body: some View {
        if x {
            return Text("This is true")
        } else {
            return VStack { Text("This is false") }
        }
    }
}

The returned type cannot be known at compile time, so this is not a valid opaque type.

If we compare with plain protocol constraints, the following code is valid because the computed value body is not an opaque type but a protocol type. The type information is “dropped” by the compiler, and the only methods that can be called on the result are the ones declared by the protocol (in our case, none).

protocol P {}
struct S1: P {}
struct S2: P {}

var x: Bool = false

var body: P {
    if x {
        return S1()
    } else {
        return S2()
    }
}

If you are interested in digging deeper, you should read the Opaque Types documentation, especially the section on the ‘Differences Between Opaque Types and Protocol Types’.

On the surface, returning an opaque type or a protocol type can look similar, but opaque types solve several limitations encountered when returning protocol types directly. The main limitation is that functions returning a protocol type do not nest well, because a value of a protocol type doesn’t itself conform to that protocol. Opaque types preserve the underlying type identity, which also makes them usable with protocols that have associated types, and thus they can be nested. That’s the reason why opaque types were needed to implement SwiftUI.

There is a lot more to learn about opaque types, as this is an advanced concept. You do not need to master opaque types to use SwiftUI but, as always, understanding the fundamental idea will help you progress on your learning path.

by Mickaël Rémond at October 03, 2019 12:11

Maxime Buquet

October 01, 2019

The XMPP Standards Foundation

XMPP Newsletter, 01 Oct 2019, FOSDEM 2020, modernization of XMPP, peer networks

Welcome to the XMPP newsletter covering the month of September.

New this month: we've made explicit that this newsletter can be shared and adapted as defined by the CC by-sa 4.0 license, and we've added the credits as this is a community effort.

Be kind, inform your friends and colleagues: forward this newsletter!

Please submit your XMPP/Jabber articles, tutorials or blog posts on our wiki.


Ralph Meijer, chairman of the Board of Directors of the XSF, has written an introductory piece about "XMPP Message Attaching, Fastening, References", specifications currently in progress.

In case of an internet shutdown, whether caused by disaster or done deliberately, peer networks are useful. Monal uses AirDrop (WiFi and Bluetooth) along with XMPP and OMEMO end-to-end encryption.

Monal and Airdrop screenshots

JabberFr, the French XMPP/Jabber community, has again translated the XMPP newsletter in French, merci beaucoup.

We have started a very minimalistic communication guide to help promote a project through social networks and other means such as blog posts. It could be valuable for those involved in XMPP projects who would like ideas on how to reach out to different communities.


As every year, that time has come: we proudly announce XMPP Summit 24 and the XSF’s participation in FOSDEM. The 24th XMPP Summit will happen on Thursday 30th and Friday 31st of January, and FOSDEM will be held on Saturday 1st and Sunday 2nd of February. Prepare for the community gathering; now is a good time to start booking your flights!


Software releases


Réda Housni Alaoui is reviving the formerly dormant Vysper XMPP server (pronounced as "whisper").

Xabber Server v.0.9 alpha is released, with quick installation and management panel and interesting innovations.

MongooseIM 3.4.1 is out now with an important security upgrade, fixing a vulnerability that allowed any logged-in user to crash the node with a malicious stanza on certain (but popular) configurations. Read the whole thread on Twitter for more information.

The igniterealtime community has a lot of news:

  • Openfire 4.4.2: "This release should better support server to server (s2s) connections, fix a few admin console XSS-style issues, and improve client session stability."
  • The Fastpath Service plugin has been released in versions 4.4.4 and 4.4.5, bringing support for managed queued chat requests, such as a support team might use.
  • Search plugin 1.7.3: "This update adds protection against CSRF and XSS attacks."
  • Monitoring Service plugin 1.8.1: "This hotfix update adds protection against XSS attacks on Archiving Settings page."

Jérôme Sautret, from ProcessOne, has announced ejabberd 19.09, that brings improved automatic certificate management stack.

The Prosody team have just released a new update to their stable branch, Prosody 0.11.3 which includes performance and compatibility improvements among other fixes.

Clients and applications

Profanity has been released in version 0.7.0 after five months of work, bringing OMEMO end-to-end encryption, followed by a 0.7.1 bug fix release.

Multiple vulnerabilities have been found in Dino, please update as soon as possible if you are a Dino user.

Converse has been released in versions 5.0.2 and 5.0.3, fixing security issues among others. Converse users may find the new plugin from Agayon to verify HTTP requests via XMPP useful. For developers, there is a Converse Docker image by odajay.

Conversations has been released in versions 2.5.8, 2.5.9, 2.5.10 and 2.5.11.

Monal 4, with iOS 13 support and dark mode, is out. The Mac update is expected to be released in October.


Christopher Muclumbus, a listing and search engine of public XMPP chats, has been updated with a visual redesign, group chat avatars, link to anonymous web chat and logs if available, and software version pie chart.

Extensions and specifications

This month, three XEPs were proposed and two were updated. No XEPs were in Last Call, made New, or Obsoleted.


Message Fastening

Abstract: This specification defines a way for payloads on a message to be marked as being logically fastened to a previous message.


XMPP Compliance Suites 2020

Abstract: This document defines XMPP protocol compliance levels.


Authorization Tokens

Abstract: This document defines an XMPP protocol extension for issuing authentication tokens to client applications and provides methods for managing client connections.



  • Version 0.13.1 of XEP-0280 (Message Carbons) has been released. Changelog: Add clear example on problematic (spoofed) carbon messages and that they need to be handled. (gl). URL:
  • Version 1.16.0 of XEP-0060 (Publish-Subscribe) has been released. Changelog: Add a pubsub#rsm disco#info feature to clear confusion (edhelas). URL:

See you in November!

This XMPP Newsletter is a community collaborative effort. Thanks to Nÿco, Daniel, Jwi, and MDosch for aggregating the news. Thanks Nÿco, Seve, Jwi, and Matt for the copywriting. Thanks Guus, Seve, Jonas for the reviews.

Please follow and relay the XMPP news on our Twitter account @xmpp.


This newsletter is published under the CC by-sa license:

by nyco at October 01, 2019 07:00

September 30, 2019


Writing a Custom Scroll View with SwiftUI in a chat application

When you are writing a chat application, you need to be able to have some control on the chat view. The chat view typically starts aligned at the end of the conversation, which is the bottom of the screen. When you have received more messages and they cannot fit on one screen anymore, you can scroll back to display them.

However, building such a conversation view using only the standard SwiftUI ScrollView is not possible in the first release of SwiftUI (as of Xcode 11), as no API is provided to define the content offset and start with the content at the bottom. It means you would be stuck displaying your chat window from the top and scrolling down to see the new messages, which is not acceptable.

In this article, I will show you how to write a custom scroll view to get the intended behaviour. It will not yet be a fully-featured scroll view, with all the bells and whistles you expect (like, for example, a scroll bar), but it will be a good example showing what is required to build SwiftUI custom views. You can then build on that example to add the features you need.

Note: The code was tested on Xcode 11.0.

What is a scroll view?

A scroll view is a view that lets you see more content than can fit on the screen, by dragging the content to display more.

From a technical point of view, a scroll view contains another view that is larger than the screen. It will then handle the “drag” events to synchronize the displayed part of the content view.

Custom SwiftUI scroll view principles

To create a custom SwiftUI view, you generally need to master two SwiftUI concepts:

  • GeometryReader: GeometryReader is a view wrapper that let child views access sizing info of their parent view.
  • Preferences: Preferences are used for the reverse operation. They can be used to propagate information from the child views to the parent. They are usually attached to the parent by creating a view modifier.

Creating an example project

We will be creating an example project, with an example conversation file in JSON format to illustrate the view rendering.

Create a new project for iOS, and select the Single View App template:

Choose a name for the new project (i.e. SwiftUI-ScrollView-Demo) and make sure you select SwiftUI for User Interface:

You are ready to start your example project.

Creating a basic view with the conversation loaded

Create a Models group in the SwiftUI-ScrollView-Demo group, and create a Swift file named Conversation.swift in that group.

It will contain a minimal model to allow rendering a conversation and populate a demo conversation with test messages, to test our ability to put those messages in a scroll view.

//  Conversation.swift
//  SwiftUI-ScrollView-Demo
struct Conversation: Hashable, Codable {
    var messages: [Message] = []
}

struct Message: Hashable, Codable, Identifiable {
    public var id: Int
    let body: String
    // TODO: add more fields (from, to, timestamp, read indicators, etc).
}

// Create demo conversation to test our custom scroll view.
let demoConversation: Conversation = {
    var conversation = Conversation()
    for index in 0..<40 {
        let message = Message(id: index, body: "message \(index)")
        conversation.messages.append(message)
    }
    return conversation
}()

Preparing the BubbleView

In this article, the message BubbleView will not look like a chat bubble. It will just be a raw cell with a gray background.

Create a new SwiftUI file named BubbleView.swift in the SwiftUI-ScrollView-Demo group.

The content of the file is as follows:

//  BubbleView.swift
//  SwiftUI-ScrollView-Demo
import SwiftUI

struct BubbleView: View {
    var message: String
    var body: some View {
        HStack {
            Spacer()
            Text(message)
                .padding(10)
                .background(Color.gray)
        }
    }
}

struct BubbleView_Previews: PreviewProvider {
    static var previews: some View {
        BubbleView(message: "Hello")
            .previewLayout(.sizeThatFits)
    }
}

It renders a right-aligned text message, with padding and gray background.

With the custom preview layout, the canvas preview will only show you the content of that view, with the preview message “Hello”:

Working on the main conversation view

You can now edit the ContentView.swift file to render your custom scroll view.

First rename your ContentView to ConversationView using the new Xcode refactoring.

Then, you can prepare your list of messages in the conversation and render them in a VStack. We put that VStack in a standard scroll view, to be able to see all the messages by scrolling the VStack inside the scroll view.

//  ConversationView.swift
//  SwiftUI-ScrollView-Demo
import SwiftUI

struct ConversationView: View {
    var conversation: Conversation

    var body: some View {
        NavigationView {
            ScrollView {
                VStack(spacing: 8) {
                    ForEach(self.conversation.messages) { message in
                        BubbleView(message: message.body)
                    }
                }
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ConversationView(conversation: demoConversation)
    }
}

The preview is getting our demoConversation to render our example conversation.

Note that you also need to edit your SceneDelegate to pass the demoConversation as a parameter when setting up your ConversationView:

//  SceneDelegate.swift
// ...
        // Create the SwiftUI view that provides the window contents.
        let contentView = ConversationView(conversation: demoConversation)
// ...

We now render all the messages in our demo Conversation, but we can see that the conversation is top-aligned, and there is currently no API to control the content offset so that the display starts from the bottom of the scroll view on init.

We will fix that in a moment by writing a custom scroll view.

Bootstrapping our custom scroll view

Add a new SwiftUI file called ReverseScrollView.swift in the project.

You can then create your ReverseScrollView, first adapting the VStack to the parent view geometry thanks to GeometryReader. By wrapping the content of the ReverseScrollView inside a GeometryReader, you can access info about the “outer” geometry (like its height).

Here is an initial version of the ReverseScrollView:

//  ReverseScrollView.swift
//  SwiftUI-ScrollView-Demo
//  Created by Mickaël Rémond on 24/09/2019.
//  Copyright © 2019 ProcessOne. All rights reserved.
import SwiftUI

struct ReverseScrollView<Content>: View where Content: View {
    var content: () -> Content

    var body: some View {
        GeometryReader { outerGeometry in
            // Render the content...
            self.content()
                //  ... and set its sizing inside the parent
                .frame(height: outerGeometry.size.height)
        }
    }
}

struct ReverseScrollView_Previews: PreviewProvider {
    static var previews: some View {
        ReverseScrollView {
            BubbleView(message: "Hello")
        }
    }
}

You can also replace the ScrollView in ConversationView to use our ReverseScrollView:

        NavigationView {
            ReverseScrollView {
                VStack {

In the Canvas, you can see the view does not scroll yet, nor is it displayed with the last message at the bottom, but it now fits properly inside its parent view.

Aligning our view content to the bottom of the ReverseScrollView

The next step is to use preferences to pass the size of the content view to our ReverseScrollView. This will allow us to align the content of the view to the bottom of our custom ScrollView.

To do that we will leverage a SwiftUI feature called preferences. The preferences will be used to track the content size in the ReverseScrollView to properly set the content offset so that it is bottom aligned.

To track the content view height, we need to define a PreferenceKey that will keep track of the total height of the view. It will sum up the value of the height of all subviews in its reduce static function. To do so, add the following code to your ReverseScrollView file:

struct ViewHeightKey: PreferenceKey {
    static var defaultValue: CGFloat { 0 }
    static func reduce(value: inout Value, nextValue: () -> Value) {
        value = value + nextValue()
    }
}

You then need to make that ViewHeightKey a view modifier that uses a few tricks to read the content size and propagate the value:

  • The view modifier is embedding a geometry reader in our content background to read the geometry. It will work, as the size of the background content is the same as the content itself.
  • The view modifier then sets the preference for that key on a Color.clear background to propagate the value to the parent, which listens for it using the onPreferenceChange modifier. We use Color.clear because we need to produce a view here but want that background to stay invisible. This trick makes it possible to read and propagate the preference using a “dummy” background view.

Here is the view modifier extension for our ViewHeightKey:

extension ViewHeightKey: ViewModifier {
    func body(content: Content) -> some View {
        return content.background(GeometryReader { proxy in
            Color.clear.preference(key: Self.self, value: proxy.size.height)
        })
    }
}

Finally, we need to keep track of that Content View Height in a ReverseScrollView state. To do so:

  • We add a contentHeight state to our ReverseScrollView.
  • We apply our view modifier ViewHeightKey to the content view.
  • We set our contentHeight State in the onPreferenceChange event for the ViewHeightKey values.
  • We apply the computed content offset on the y axis of the content. To calculate it, we use the following function, which takes the scroll view height and the content height so that the content is bottom-aligned (see below).
    // Calculate content offset
    func offset(outerheight: CGFloat, innerheight: CGFloat) -> CGFloat {
        print("outerheight: \(outerheight) innerheight: \(innerheight)")

        let totalOffset = currentOffset + scrollOffset
        return -((innerheight/2 - outerheight/2) - totalOffset)
    }

The content view is now bottom-aligned and the last message (Message 39) is properly displayed at the bottom of our custom scroll view.

You can check the final code in ReverseScrollView.swift.

Making our scroll view scrollable

The final step is to make our custom scroll view actually scrollable, synchronized with vertical drag events.

First, we need to add two new states to keep track of the scroll position:

  • The current scroll offset (currentOffset) sets the content offset once a drag gesture has ended.
  • The scroll offset (scrollOffset) synchronizes the content offset while the user is still dragging the view.

We will update those two states in the drag gesture’s onChanged and onEnded handlers.

Here is the handler for an ongoing drag event:

    func onDragChanged(_ value: DragGesture.Value) {
        // Update rendered offset
        print("Start: \(value.startLocation.y)")
        print("Location: \(value.location.y)")
        self.scrollOffset = (value.location.y - value.startLocation.y)
        print("Scrolloffset: \(self.scrollOffset)")
    }

and when drag ends, we store the current position, enforcing top and bottom limits:

    func onDragEnded(_ value: DragGesture.Value, outerHeight: CGFloat) {
        // Update view to target position based on drag position
        let scrollOffset = value.location.y - value.startLocation.y
        print("Ended currentOffset=\(self.currentOffset) scrollOffset=\(scrollOffset)")

        let topLimit = self.contentHeight - outerHeight
        print("toplimit: \(topLimit)")

        // Negative topLimit => Content is smaller than screen size. We reset the scroll position on drag end:
        if topLimit < 0 {
            self.currentOffset = 0
        } else {
            // We cannot pass bottom limit (negative scroll)
            if self.currentOffset + scrollOffset < 0 {
                self.currentOffset = 0
            } else if self.currentOffset + scrollOffset > topLimit {
                self.currentOffset = topLimit
            } else {
                self.currentOffset += scrollOffset
            }
        }
        print("new currentOffset=\(self.currentOffset)")
        self.scrollOffset = 0
    }

We also need to update the offset calculation to take into account the drag states:

    // Calculate content offset
    func offset(outerheight: CGFloat, innerheight: CGFloat) -> CGFloat {
        print("outerheight: \(outerheight) innerheight: \(innerheight)")

        let totalOffset = currentOffset + scrollOffset
        return -((innerheight/2 - outerheight/2) - totalOffset)
    }

Finally, you need to track the gesture on the content view, and link those gestures to our drag function:

            // ...
            .gesture(DragGesture()
                .onChanged({ self.onDragChanged($0) })
                .onEnded({ self.onDragEnded($0, outerHeight: outerGeometry.size.height) }))

We use this opportunity to also apply some animation, to smooth the end-of-drag scroll position correction when hitting the limits. We now have a custom scroll view that starts at the bottom and can be scrolled properly, with correct top and bottom limits.

Final project

You can download the final project example from Github: SwiftUI-ScrollView-Demo.

What’s next?

Let us know in the comments if you are interested in follow-up blog posts. Here are possible additional features that could make sense to illustrate in detail:

  • Handle device rotation
  • Show how to add messages to the conversation
  • Kinetic scroll with deceleration
  • Better bounce when hitting limits
  • Add a scroll bar
  • Kinetic animation for chat bubbles when scrolling (bit of springy behaviour)

Photo by Alvaro Reyes, Unsplash

by Mickaël Rémond at September 30, 2019 18:19

Prosodical Thoughts

Prosody 0.11.3 released

We are pleased to announce a new minor release from our stable branch. This is a bugfix release for the stable 0.11 branch, and all users of 0.11.x are recommended to upgrade.

Important note for those upgrading: previous releases did not automatically expire messages from group chat (MUC) archives, so if mod_muc_mam was loaded and enabled for a MUC, archives would grow indefinitely. This is not what most deployments want, therefore automatic expiry is now implemented and enabled with a default 7-day retention.

by The Prosody Team at September 30, 2019 10:41

September 26, 2019


ejabberd 19.09

We are pleased to announce ejabberd version 19.09. The main focus has been to improve the automatic certificate management stack (Let’s Encrypt). We also fixed bugs that had been introduced during the previous big refactoring of the configuration file handling, along with the usual assortment of bug fixes.

New Features and improvements

Better ACME support

In this release ACME support has been significantly improved. ACME is used to automatically obtain SSL certificates for the domains served by ejabberd.

The newest version of ACME (so-called ACMEv2) is now supported. The implementation is now much more robust and able to perform certificate requests and renewals in a fully automated mode.

The automated mode is enabled by default. However, since ACME requires HTTP challenges (i.e. an ACME server will connect to the ejabberd server on HTTP port 80 during certificate issuance), some configuration of ejabberd is still required. Namely, an HTTP listener for the ejabberd_http module should be configured on a non-TLS port with the so-called “ACME well-known” request handler:

listen:
  -
    module: ejabberd_http
    port: 5280
    request_handlers:
      /.well-known/acme-challenge: ejabberd_acme

Note that the ACME protocol requires challenges to be sent on port 80. Since this is a privileged port, ejabberd cannot listen on it directly without root privileges. Thus you need some mechanism to forward port 80 to the port defined by the listener (port 5280 in the example above).

There are several ways to do this: using NAT or HTTP front-ends (e.g. sslh, nginx, haproxy and so on). Pick one that fits your installation the best, but DON’T run ejabberd as root.
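For example, on a Linux host with iptables available, a single NAT redirect rule can forward the privileged port to the listener. This is an illustrative firewall configuration, not from the original post; adapt it to the front-end you actually use:

```shell
# Redirect incoming HTTP traffic on port 80 to ejabberd's
# non-TLS listener on port 5280 (run as root, once, at setup time).
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 5280
```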

If you see errors in the logs with ACME server problem reports, it’s highly recommended to change the ca_url option in the acme section to the URL of a staging ACME environment, fix the problems until you obtain a certificate, and then change the URL back and retry using the request-certificate ejabberdctl command (see below).

This is needed because ACME servers typically enforce rate limits, preventing you from requesting certificates too rapidly, and you can get stuck for several hours or even days.

By default, ejabberd uses the Let’s Encrypt authority, so the default value of the ca_url option points at the Let’s Encrypt production environment, and the staging URL points at its staging environment:

  ## Staging environment:
  # ca_url: https://acme-staging-v02.api.letsencrypt.org/directory
  ## Production environment (the default):
  # ca_url: https://acme-v02.api.letsencrypt.org/directory

The automated mode can be disabled by setting the auto option in the acme section to false:

  auto: false

In this case automated renewals are still enabled; however, in order to request a new certificate, you need to run the request-certificate ejabberdctl command:

$ ejabberdctl request-certificate all

If you only want to request certificates for a subset of the domains, run:

$ ejabberdctl request-certificate domain.tld,pubsub.domain.tld

You can view the certificates obtained using ACME:

$ ejabberdctl list-certificates
domain.tld /path/to/cert/file1 true /path/to/cert/file2 false

The output is mostly self-explanatory: every line contains the domain, the corresponding certificate file, and whether that certificate file is used or not. A certificate might not be used for several reasons, mostly because ejabberd detected a better certificate (i.e. one not expired, or with a longer lifetime). It’s recommended to revoke unused certificates if they are not yet expired (see below).

At any point you can revoke a certificate: pick the certificate file from the listing above and run:

$ ejabberdctl revoke-certificate /path/to/cert/file

If the commands return errors, consult the log files for details.


Some people have reported issues connecting to the web administration console. To solve this, the requirement to connect using a URL whose domain corresponds to an XMPP domain has been reverted.

Technical changes

Erlang/OTP requirement

Erlang/OTP 19.3 is now the minimum supported Erlang version for this release.

Database schema changes

There is no change to perform on the database to move from ejabberd 19.08 to ejabberd 19.09. Still, as usual, please, make a backup before upgrading.

Download and install ejabberd 19.09

The source package and binary installers are available at ProcessOne. If you installed a previous version, there are no additional upgrade steps, but as good practice, please backup your data.

As usual, the release is tagged in the Git source code repository on GitHub. If you suspect that you’ve found a bug, please search or file a bug report in Issues.

Full changelog

* Admin
– The minimum required Erlang/OTP version is now 19.3
– Fix API call using OAuth (#2982)
– Rename MUC command arguments from Host to Service (#2976)

* Webadmin
– Don’t treat ‘Host’ header as a virtual XMPP host (#2989)
– Fix some links to Guide in WebAdmin and add new ones (#3003)
– Use select fields to input host in WebAdmin Backup (#3000)
– Check account auth provided in WebAdmin is a local host (#3000)

* ACME
– Improve ACME implementation
– Fix IDA support in ACME requests
– Fix unicode formatting in ACME module
– Log an error message on IDNA failure
– Support IDN hostnames in ACME requests
– Don’t attempt to create ACME directory on ejabberd startup
– Don’t allow requesting certificates for localhost or IP-like domains
– Don’t auto request certificate for localhost and IP-like domains
– Add listener for ACME challenge in example config

* Authentication
– JWT-only authentication for some users (#3012)

* MUC
– Apply default role after revoking admin affiliation (#3023)
– Custom exit message is not broadcast (#3004)
– Revert “Affiliations other than admin and owner cannot invite to members_only rooms” (#2987)
– When join new room with password, set pass and password_protected (#2668)
– Improve rooms_* commands to accept ‘global’ as MUC service argument (#2976)
– Rename MUC command arguments from Host to Service (#2976)

* SQL
– Fix transactions for Microsoft SQL Server (#2978)
– Spawn SQL connections on demand only

* Misc
– Add support for XEP-0328: JID Prep
– Added gsfonts for captcha
– Log Mnesia table type on creation
– Replicate Mnesia ‘bosh’ table when nodes are joined
– Fix certificate selection for s2s (#3015)
– Provide meaningful error when adding non-local users to shared roster (#3000)
– Websocket: don’t treat ‘Host’ header as a virtual XMPP host (#2989)
– Fix sm ack related c2s error (#2984)
– Don’t hide the reason why c2s connection has failed
– Unicode support
– Correctly handle unicode in log messages
– Fix unicode processing in ejabberd.yml

by Jérôme Sautret at September 26, 2019 15:02

Ignite Realtime Blog

Openfire 4.4.2 Release

@akrherz wrote:

The Ignite Realtime Community is happy to announce the promotion of release 4.4.2 of Openfire. This release signifies our effort to stabilize the 4.4 branch of Openfire while work continues on the next feature release. A changelog exists denoting the 22 Jira issues resolved since the 4.4.1 release. This release should better support server-to-server (s2s) connections, fix a few admin console XSS-style issues, and improve client session stability.

You can find downloads available with the following sha1sum values for the release artifacts.

f0d116fa699cb0668cf5761e888b77031edbca75  openfire-4.4.2-1.i686.rpm
5ebb03c6d7531bf181fa70b86270f11b31650c5b  openfire-4.4.2-1.noarch.rpm
d332038208197fbdd6d2e96ade2262e82c3faa1a  openfire-4.4.2-1.x86_64.rpm
6b4796507f337536a0d2e138f482c5817a346911  openfire_4.4.2_all.deb
be3a7c14f9670dfcf3b34a125387420f277f7bd3  openfire_4_4_2_bundledJRE.exe
41661466dbff8611628edcdddf53025f3039fe80  openfire_4_4_2_bundledJRE_x64.exe
2eba17818b834fd7fce1a2e5610be1ca16c47df4  openfire_4_4_2.dmg
d9e7504d363df4534b02c87ffcacb3c70748809a  openfire_4_4_2.exe
47c80c7c365f6980e3719f07a3ac32a03cc6a20d  openfire_4_4_2.tar.gz
40281cbb650bdc45e899b533a4f44fb6d9d32dbd  openfire_4_4_2_x64.exe
eeffaa918c1de50c833cd33f22550024ab4fd40b  openfire_src_4_4_2.tar.gz

Please let us know in the Community Forums of any issues you have. We are always looking for folks interested in helping out with development, documentation, and testing of Openfire. Consider stopping by our web support group chat and saying hi!

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @akrherz daryl herzmann at September 26, 2019 02:18

September 25, 2019

Ignite Realtime Blog

Monitoring Service plugin 1.8.1 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.8.1 of the Monitoring Service plugin for Openfire!

This hotfix update adds protection against XSS attacks on Archiving Settings page.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Monitoring Service plugin archive page.

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at September 25, 2019 20:07

Search plugin 1.7.3 released

@wroot wrote:

The Ignite Realtime community is happy to announce the immediate release of version 1.7.3 of the Search plugin for Openfire!

This update adds protection against CSRF and XSS attacks.

Your instance of Openfire should automatically display the availability of the update in the next few hours. Alternatively, you can download the new release of the plugin at the Search plugin archive page.

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at September 25, 2019 19:51

Fastpath Service plugin 4.4.5 released

@wroot wrote:

The Ignite Realtime community is happy to announce the release of version 4.4.5 of the Fastpath Service plugin for Openfire!

This hotfix update fixes the blank page issue after installing 4.4.4.

Your instance of Openfire should automatically display the availability of the update. Alternatively, you can download the new release of the plugin at the Fastpath Service plugin’s archive page .

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at September 25, 2019 04:33

September 24, 2019


Using a local development trusted CA on MacOS

TLS certificates are so ubiquitous that you now very often need them even during the development phase.

Developers are thus used to creating “self-signed” certificates and configuring their TLS clients to accept them. This can be fine for development: as both the client and the server are on the same computer, the risk of man-in-the-middle attacks is limited. However, this can still be a dangerous setup, as you risk forgetting to remove that option and shipping to production a client that accepts self-signed certificates. It would then be vulnerable to man-in-the-middle attacks, defeating the purpose of using TLS in the first place.

The best approach is to create your own local certificate authority for development and to have your development computer trust that root CA. This article will show how to configure such an environment on MacOS with both ejabberd and Phoenix. If you are developing on Linux however, you should be able to use mkcert as well and can skip the part on setting up the iOS simulator.

Note: Be careful not to share your root CA key, as it could be exploited to run targeted man-in-the-middle attacks against your development computer.

Creating the local trusted Certificate Authority

The mkcert project makes this setup very easy.

First, install mkcert on your Mac, for example with Homebrew:

$ brew install mkcert
$ brew install nss # if you use Firefox

Then, create your CA root certificate and install it in your trusted store (it will ask your admin password):

$ mkcert -install
Using the local CA at "/Users/mremond/Library/Application Support/mkcert" ✨
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊

Finally, create your signed certificate for your localhost:

$ mkcert localhost ::1
Using the local CA at "/Users/mremond/Library/Application Support/mkcert" ✨

Created a new certificate valid for the following names 📜
 - "localhost"
 - ""
 - "::1"

The certificate is at "./localhost+2.pem" and the key at "./localhost+2-key.pem" ✅

You should now be able to use Safari or Firefox with a service using that new certificate without having any SSL trust warning.

Installing your root certificate in your iOS simulator and development device

When developing iOS applications, you have to use HTTPS or secure WebSockets (or set a temporary exception in your project Info.plist). Reusing your local certificate authority is thus the best approach during development as well. You just have to tell iOS to trust your local CA.

  1. Drag and drop the rootCA.pem file on your simulator. That file is located in the directory ~/Library/Application Support/mkcert. The simulator will open Safari and offer to download the file.

  2. Accept the file download. Safari will then confirm that the file has been added to your profiles.

You now need to install it from the Settings app to trust it.

  1. Go to the Settings app and then to “General -> Profile”. Select your new mkcert profile and click install and then confirm installation. The root certificate should now be displayed as verified.

  2. Still in the Settings app, you can now go to “General -> About -> Certificate Trust Settings” to enable full trust for that root certificate.

You will now see no trust warning when you connect over HTTPS to localhost on a site using your signed cert. You can also use HTTPS or secure WebSockets from a development iOS app, without any workaround.

You can use a similar approach on your development device to have your device download the rootCA.pem file and ensure the OS will trust it.

Configuring your server applications to use your signed certificates


Configuring ejabberd to use your signed development certificate can be done by copying the localhost+2.pem and localhost+2-key.pem files to the ejabberd config directory and referring to them in the config file:

hosts:
  - localhost

certfiles:
  - localhost+2-key.pem
  - localhost+2.pem

listen:
  -
    port: 5222
    ip: "::"
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: true
  -
    port: 5269
    ip: "::"
    module: ejabberd_s2s_in
    max_stanza_size: 524288
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      "/admin": ejabberd_web_admin
      "/api": mod_http_api
      "/bosh": mod_bosh
      "/captcha": ejabberd_captcha
      "/upload": mod_http_upload
      "/ws": ejabberd_http_ws
      "/oauth": ejabberd_oauth

You should now be able to use the ejabberd HTTPS service on https://localhost:5443 without any warning. You can thus use secure WebSocket and BOSH over HTTPS.


Copy your localhost key and cert to your Phoenix project directory priv/cert:

$ mkdir priv/cert
$ mv localhost+2* priv/cert

Update the file config/dev.exs to use your localhost development key and cert. Add your https configuration in your endpoint config:

  https: [
    port: 4001,
    cipher_suite: :strong,
    keyfile: "priv/cert/localhost+2-key.pem",
    certfile: "priv/cert/localhost+2.pem"
  ]
Now, when you start your development server with mix phx.server, it will support a new trusted https service on port 4001.

Photo by Christopher Gower, Unsplash.

by Mickaël Rémond at September 24, 2019 10:07

September 21, 2019

Monal IM

Monal 4 out

It was an odd week, but Monal 4 with iOS 13 support and dark mode is out. I’ll put out the equivalent Mac update this week and then get back to my planned roadmap.

by Anu at September 21, 2019 11:44

September 20, 2019


Uniting global football fans with an XMPP geocluster

When you are running one of the top sport brands, launching a new innovative app always means it comes with great expectations from your fans.

That’s why highly recognised brands turn to ProcessOne. They need to be sure that the project will launch in time, will perform well and will not collapse under the load, no matter what happens.

In early 2014, a top brand contacted us with a clear and ambitious project in mind. We had 6 months to develop and host the infrastructure for a large-scale realtime communication and messaging project. The deadline was not flexible. The app needed to be live just before the FIFA World Cup in Brazil.

ProcessOne worked with the customer’s mobile team to help them accelerate their development. XMPP can be a complex protocol, and if you wish to launch on a tight deadline, you need to take a “fast lane” approach and get help directly from experts.

We also had to host the project, providing low latency for connections and messaging everywhere in the world. We deployed a geocluster, handling a single XMPP domain in four data centers across the world. We operated our deployment in 4 AWS regions: USA to serve North America, Brazil to serve South America, Singapore to serve Asia and Ireland to serve Europe.

We used our custom Erlang component to deploy a geocluster at scale, handling ourselves all database synchronization across regions and auto-recovery in case of netsplits.

The application was ready just in time to handle the load of fans installing and trying the app. Soon after the launch, we had to handle an unexpected 10x peak load when Cristiano Ronaldo, with tens of millions of followers, retweeted about the app — and the platform handled it without a flinch!

With a 100% uptime, the project was a success, and the app gathered hundreds of thousands of football fans from all over the world. Did you guess which brand it was?

Areas of expertise:
– Realtime messaging
– Push notifications
– Platform management / hosting

– Backend: Erlang / XMPP / ejabberd

by Marek Foss at September 20, 2019 14:51

Monal IM

A mad dash to the finish

For the past week I have been trying to figure out why Monal stopped working in iOS 13. Building with the iOS 13 SDK is the only way to get dark mode (and French App Store support). It appears that when I build with this SDK, it disables the push mechanism I use, and the replacement does not work well. I wasn’t planning on needing to get it working until mid 2020. I am still allowed to submit builds with the iOS 12 SDK, so I may have to sacrifice dark mode and other iOS 13 niceties for now until I have the core messaging at parity. I did manage to get it close, and the server infrastructure is set up to handle it, so I will keep working on that. I’ll probably keep two different test versions going: 4.0 for iOS 12 builds that make it to the App Store, and 4.1 for iOS 13 builds that use the new mechanism.

by Anu at September 20, 2019 02:19

September 19, 2019


Fluux and ejabberd BE support iOS 13 Apple Push Notification System from day one

Yesterday, Apple unveiled the new iPhones and set September 19 as the date for the release of iOS 13. This new operating system brings lots of new features and changes, but among them there’s one easy-to-miss update that’s very significant for messaging server operators: an update to APNS requests that enable sending push notifications to iOS clients.

We are happy to announce that today we deployed an update to how our services like Fluux, and software like ejabberd Business Edition, handle APNS requests, so all our endpoints are compatible and support iOS 13 APNS from day one.

Technical details

This small but significant update means adding support for the new apns-push-type parameter while sending the request to the APNS. And of course, a respective interface for our customers to be able to set and manipulate the said parameter.

It’s significant because, as the Apple docs state, the new parameter is “Required when delivering notifications to devices running iOS 13 and later, or watchOS 6 and later. (…) The value of this header must accurately reflect the contents of your notification’s payload. If there is a mismatch, or if the header is missing on required systems, APNs may delay the delivery of the notification or drop it altogether.”

As you can see, the implementation has to be carefully executed and tested. Thanks to our expert team and proven processes, we were able to prepare the updates to our services and software quickly and deploy them today, ahead of iOS 13 launch.

Important note: If you are using ejabberd Business Edition, you only need to update if you are already using ejabberd with APNS v3. If you are using the legacy APNS v2 service, you do not have to update at the moment.

Photo by Jamie Street, Unsplash

by Marek Foss at September 19, 2019 10:32

We are not an Erlang company

ProcessOne has made a mark on Erlang history, developing reference software in Erlang, providing strong Erlang expertise and helping grow its ecosystem. Still, ProcessOne is much more than our Erlang fame. We are technology agnostic. We are great at selecting the right tool for the job to build innovative projects for our customers.


ProcessOne is much more than an Erlang company and we can help you develop your projects in Go, Swift and Kotlin.

Our Erlang history

ProcessOne is an Erlang pioneer. The core team has been developing in Erlang since it was released as open source in 1998. We built the most famous Erlang project, the real-time messaging platform ejabberd, and RabbitMQ followed suit, with ejabberd leading the way.

We used Erlang very early because it was, at the time, the only way to build highly robust and scalable platforms. ejabberd is able to handle a massive number of connections, supports clustering and allows hot code upgrades – all thanks to the underlying Erlang VM.

We jumped on the Elixir bandwagon early on for much the same reasons. It is a technology that fills a niche, making it possible to build large-scale real-time web platforms with a syntax that feels more familiar (especially to Ruby developers).

Innovation is our driver

When we started ProcessOne, there was no Amazon Web Services. Servers were many times slower than they are today. There was no Docker, no Kubernetes. Basically, there was no alternative to the Erlang VM if you wanted to build a scalable and manageable service. And most of all, there was no iPhone or Android, whose support is now a critical component of many projects.

So, over the years, we had to refine what innovating for our customers meant, integrating new technologies in our skill set and our stack.

In the context of our customers’ projects, Erlang or even Elixir is not always the best answer. It is hard to find Erlang or Elixir skills, and training an Erlang or Elixir team is most of the time not desirable for customers. They often need to hire and train large sets of developers and sysadmins. They need a path for innovation that fits their corporate culture. And finally, Erlang and Elixir are not enough to cover client-side development, especially on mobile devices.

That’s why we had to refine our technology stack with innovation as our main focus.

Expanding our technology stack

Over the years, we had to expand our technology stack to have a good balance between innovation and ability for our customers to adopt it. We ended up developing the following additional skills to handle our customer projects:

  • Go to handle server-side services and web applications. Go is very popular and provides good scalability and maintainability. Even if Go seems approachable and easy, writing very good Go code that feels like “native Go” and is maintainable is hard. However, it is accessible for our customers, and we have found it quite easy to transfer the knowledge to customers’ teams. Once we have laid the foundations and delivered a strong Go application, it is easy for our customers to take over the project. It also works well with our Erlang components, as our customers can use them as black boxes.
  • Swift to handle iOS native developments. This is the de facto standard in the Apple ecosystem. Given the high-profile customers we target, cross-platform development is generally not enough. That’s why we are investing in providing state-of-the-art native code, following the latest standards on iOS/macOS/watchOS/tvOS.
  • Kotlin to handle native Android developments. We target high-end projects that need native development with the latest features. Kotlin is the way to deliver future-proof code on Android.

We are still working on Erlang projects and you can expect the same involvement in ejabberd. Still, you can also expect us to talk more about our other skills and to see new projects in those languages. Our Open Source software stack will get richer and serve as basic building blocks for our customers’ projects.


Given our unique Erlang history, we are often still seen as an Erlang company. While we have unique expertise, and part of our customer base works with us because of that Erlang or Elixir expertise, limiting ProcessOne to this small set of technologies could not be further from the truth.

We deliver full projects for customers that need to launch innovative services in record time. Finally, we hand over that innovation so that it can live on for many years after we have delivered our result.

Working with ProcessOne is like having your own team of R&D experts that can bootstrap your project to help you reach the market before the competition, without compromising on technology.

Do not hesitate to contact us if you need help building your projects, with the confidence that you will be able to integrate the innovation internally.

by Mickaël Rémond at September 19, 2019 09:17

September 18, 2019

Monal IM

Monal 4

iOS 13 comes out tomorrow and Monal 4 will be out in the next day or so. I was hoping to be ready on day 1 this time, and I was for dark mode, but the push issues took me by surprise; I expected to have until 2020 to resolve them. Monal as it is in the App Store won’t work with iOS 13: the OS will block the VoIP pushes. The updated version will work with a new push server that does not send VoIP pushes anymore. I have posted that to the beta channel today. It just tells you there is a new message and doesn’t have the message text or sender. It’s not great, but the alternative is not knowing when you have messages. I will continue to work on the notification extension I am using for these messages and will update it in the coming week; hopefully we will get behavior similar to iOS 12 soon.

Have I mentioned how much I hate Facebook messing this up for everyone.

by Anu at September 18, 2019 23:36

Ignite Realtime Blog

Fastpath Service plugin 4.4.4 released

@wroot wrote:

The Ignite Realtime community is happy to announce the release of version 4.4.4 of the Fastpath Service plugin for Openfire!

This update fixes an exception when pressing on a workgroup name, makes it possible to build the plugin with Maven, and adds an ad-hoc command to update a workgroup.

Your instance of Openfire should automatically display the availability of the update. Alternatively, you can download the new release of the plugin at the Fastpath Service plugin’s archive page

For other release announcements and news follow us on Twitter

Posts: 1

Participants: 1

Read full topic

by @wroot wroot at September 18, 2019 19:14


ejabberd 19.08

We are pleased to announce ejabberd version 19.08. The main focus has been to further improve ease of use, consistency, performance, but also to start cleaning up our code base. As usual, we have kept on improving server performance and fixed several issues.

New Features and improvements

New authentication method using JWT tokens

You can now authenticate with JSON Web Tokens (JWT).

This feature allows you to have your backend generate authentication tokens with a limited lifetime.

Generating JSON Web Key

You need to generate a secret key to be able to sign your JWT tokens.

To generate a JSON Web Key, you can for example use JSON Web Key Generator, or use your own local tool for production deployment.

The result looks like this:

  "kty": "oct",
  "use": "sig",
  "k": "PIozBFSFEntS_NIt...jXyok24AAJS8RksQ",
  "alg": "HS256"

Save your JSON Web Key in a file, e.g. secret.jwk.

Be careful: This key must never be shared or committed anywhere. With that key, you can generate credentials for any users on your server.

Configure ejabberd to use JWT auth

In ejabberd.yml change auth_method to jwt and add the jwt_key option pointing to secret.jwk:

auth_method: jwt
jwt_key: "/path/to/jwt/key"

Generate some JWT tokens

The payload of the JSON Web Token must look like this:

{
  "jid": "",
  "exp": 1564436511
}

And the encoded token looks like this:


Authenticate on XMPP using encoded token as password

Now, the user can use this token as a password before 1564436511 epoch time (i.e. July 29, 2019 21:41:51 GMT).

New configuration validator

With ejabberd 19.08, we introduce a new configuration checker, giving more precise configuration guidance in case of syntax errors or misconfiguration. This configuration checker has also been released as an independent open source project: yconf.

The new configuration validator makes it possible to improve the configuration parsing. For example, it supports the following:

Better handling of Erlang atom vs string

There is no need to quote strings to express the fact that you want an atom in the configuration file: the new configuration validator handles the Erlang type mapping automatically.

More flexible ways to express timeouts

Now, all timeout values can be expanded with suffixes, e.g.

negotiation_timeout: 30s
s2s_timeout: 10 minutes
cache_life_time: 1 hour

If no suffix is given, the timeout is assumed to be in seconds.

Atomic configuration reload

The configuration will either be fully reloaded or rolled back.

Better, more precise error reporting

Here are a couple of examples of the kind of message that the new configuration validator can produce.

In the following example, the validator will check against a value range:

14:15:48:32.582 [critical] Failed to start ejabberd application: Invalid value of option loglevel: expected integer from 0 to 5, got: 6

More generally, it can check value against expected types:

15:51:34.007 [critical] Failed to start ejabberd application: Invalid value of option modules->mod_roster->versioning: expected boolean, got string instead

It will report invalid values and suggest fixes in case error was possibly due to a typo:

15:50:06.800 [critical] Failed to start ejabberd application: Invalid value of option modules->mod_pubsub->plugins: unexpected value: pepp. Did you mean pep? Possible values are: flat, pep

Prevent use of duplicate options

Finally, it will also properly fail on duplicate options and properly report the error:

15:56:35.227 [critical] Failed to start ejabberd application: Configuration error: duplicated option: s2s_use_starttls

Duplicate options were a source of errors, as one option could silently shadow another, possibly in an included file.

Improved scalability

We improved the scalability of several modules:

Multi-User Chat

The MUC room module is more scalable, supporting more rooms by hibernating a room after a timeout. Hibernating means removing the room from memory when it is not in use and reloading it on demand.

MUC message processing has also been changed to properly use all available CPU cores. MUC room message handling is now faster and supports larger throughput on SMP architectures.
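As a sketch, room hibernation can be tuned in mod_muc with a timeout; the option name hibernation_timeout and the value below are assumptions for illustration, so check the mod_muc documentation for your version:

```yaml
modules:
  mod_muc:
    # Assumed option name: remove idle rooms from memory
    # after 10 minutes, reloading them on demand.
    hibernation_timeout: 10 minutes
```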

SQL database handling

We improved the way the SQL pool is managed to better handle high load. We also improved the MySQL schema a bit to help with indexing.

Changed implementation of mod_offline option use_mam_for_storage

The previous version tried to determine the range of messages to fetch from MAM by storing the time when the last user resource disconnected. That approach had a couple of edge cases that could cause problems: for example, after a hardware node crash, the disconnect information was never stored, so we had no data to initiate the MAM query.

The new version doesn't track user disconnects; it simply ensures that we have the timestamp of the first message that will be put in storage. Measurements showed that the cost of this check, with caching on top, is low, and since the approach is much more robust, we decided to move to it.
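Enabling this behavior is unchanged; only the internal implementation differs. A minimal configuration sketch, assuming module defaults for MAM:

```yaml
modules:
  mod_mam: {}
  mod_offline:
    # Store "offline" messages in the MAM archive
    # instead of the dedicated offline storage.
    use_mam_for_storage: true
```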

New option captcha_url

Option captcha_host is now deprecated in favor of captcha_url. However, it is not replaced automatically at startup: both options are supported, with captcha_url being the preferred one.
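A sketch of the migration, with a placeholder hostname (the /captcha path assumes the usual ejabberd_captcha request handler setup):

```yaml
# Old, deprecated form:
# captcha_host: example.org:5280

# New, preferred form, giving the full URL of the captcha handler:
captcha_url: https://example.org:5280/captcha
```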

Deprecated ‘route_subdomains’ option

This option was introduced to fulfil the requirements of RFC3920, section 10.3, but in practice it was very inconvenient, and many admins were forced to change its value to ‘s2s’ (i.e. to behaviour that violates the RFC). Also, this requirement seems to be no longer present in RFC6120.

Those admins who used this option to block s2s with their subdomains can use the ‘s2s_access’ option for the same purpose.
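A sketch of such a setup, where the ACL and rule names and the domain are placeholders chosen for illustration:

```yaml
acl:
  my_subdomains:
    # Match all subdomains of the placeholder domain
    server_glob: "*.example.org"
access_rules:
  s2s_blocked_subdomains:
    - deny: my_subdomains
    - allow: all
# Deny s2s connections with the matched subdomains
s2s_access: s2s_blocked_subdomains
```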

API changes

Renamed arguments from ‘Server’ to ‘Host’

Several ejabberd commands still used ‘Server’ as an argument name, instead of the more common ‘Host’. Such arguments have been renamed, and backward compatibility allows old calls to keep working.

The eight affected commands are:
– add_rosteritem
– bookmarks_to_pep
– delete_rosteritem
– get_offline_count
– get_presence
– get_roster
– remove_mam_for_user
– remove_mam_for_user_with_peer

If you are using these calls, please update your parameter names to Host when moving to ejabberd 19.08. You will thus use a more consistent API and be future-proof.

Technical changes

Removed Riak support

Support for the Riak database has been removed, for several reasons:

  • Riak DB development has almost halted since Basho went out of business
  • riak-erlang-client is abandoned and doesn’t work correctly with OTP 22
  • Riak is slow in comparison to other databases
  • Missing key ordering makes it impossible to implement range queries efficiently (e.g. MAM queries)

If you are using Riak, you can contact ProcessOne to get assistance migrating to DynamoDB, a horizontally scalable key-value datastore made by Amazon.

Erlang/OTP requirement

Erlang/OTP 19.1 is still the minimum supported Erlang version for this release.

Database schema changes

There is no change to perform on the database to move from ejabberd 19.05 to ejabberd 19.08.
Please make a backup before upgrading.

This means that the old schema from ejabberd 19.05 will work on ejabberd 19.08. However, if you are using MySQL, you should note that we changed the type of the server_host field to perform better with indexes. The change is not mandatory, but changing the field to varchar(191) will produce more efficient indexes.

You can check the upgrade page for details: Upgrading from ejabberd 19.05 to 19.08

Download and install ejabberd 19.08

The source package and binary installers are available at ProcessOne. If you installed a previous version, please read ejabberd upgrade notes.
As usual, the release is tagged in the Git source code repository on GitHub. If you suspect that you’ve found a bug, please search or file a bug report in Issues.

Full changelog

– Improve ejabberd halting procedure
– Process unexpected Erlang messages uniformly: logging a warning
– mod_configure: Remove modules management

– Use new configuration validator
– ejabberd_http: Use correct virtual host when consulting trusted_proxies
– Fix Elixir modules detection in the configuration file
– Make option ‘validate_stream’ global
– Allow multiple definitions of host_config and append_host_config
– Introduce option ‘captcha_url’
– mod_stream_mgmt: Allow flexible timeout format
– mod_mqtt: Allow flexible timeout format in session_expiry option

– Fix SQL connections leakage
– New authentication method using JWT tokens
– extauth: Add ‘certauth’ command
– Improve SQL pool logic
– Add and improve type specs
– Improve extraction of translated strings
– Improve error handling/reporting when loading language translations
– Improve hooks validator and fix bugs related to hooks registration
– Gracefully close inbound s2s connections
– mod_mqtt: Fix usage of TLS
– mod_offline: Make count_offline_messages cache work when using mam for storage
– mod_privacy: Don’t attempt to query ‘undefined’ active list
– mod_privacy: Fix race condition

– Add code for hibernating inactive muc_room processes
– Improve handling of unexpected iq in mod_muc_room
– Attach mod_muc_room processes to a supervisor
– Restore room when receiving a message or a generic iq for not started room
– Distribute routing of MUC messages across all CPU cores

– Fix pending nodes retrieval for SQL backend
– Check access_model when publishing PEP
– Remove deprecated pubsub plugins
– Expose access_model and publish_model in pubsub#metadata

by Mickaël Rémond at September 18, 2019 06:40