Edit: I have received mixed feedback on this post. Some say it does not flow, while others said it was exactly what they had been thinking. So I decided to keep it as a legacy post and add a TL;DR.

TL;DR

Using an agent-based approach to manage clients that connect to a server reduces the effort we put into handling per-client state. Our agents can be stateful, so the need for global state repositories (like maps) drops to a minimum. And since our servers already create a dedicated goroutine for each client, the agents can run inside those goroutines with no need to fire up extra ones.


Prologue

We have these servers in place, receiving streams of data - bytes, packets, states - from incoming connections and sending back responses to the other end of those connections.

Like TCP servers, HTTP servers or a Telegram bot handling bytes, packets and incoming update states.

It’s all familiar. We just provide a handler, which is given the incoming goods. It processes them and sends back the result. The server uses this one handler to handle each connection’s affairs.

Server Arch

Now the handler - our glorious, unique, single-instance handler - has to somehow manage the state for each client/connection, to provide some sort of logic - architectural, supervisory or app-level, such as panic handling, metrics or session data.

Maybe we act upon session information for multi-step app-level logic. Maybe we have a state machine for each client to figure out if they won! Maybe we have to drop inactive clients after a timeout and make room for others.

We need a map - a map that maps each client to its state. For example, if we are writing a TCP server, we can just define a map[net.Conn]interface{} - thanks to Go, we can use net.Conn as a map key - and of course we need a locking mechanism for accessing the map concurrently.

All good and nothing but familiar, right?

Analyze This & That

OK, when writing a TCP server we can use net.Conn as the key - the id - to the client/connection context. What about a Telegram bot? Thanks to Telegram’s servers, we get a unique user id and can use it to access each active user’s context. And what about HTTP connections? There is gorilla/context for providing request context (I’m aware of that) - which also uses maps, keyed by *http.Request - and there is gorilla/sessions for context with a longer lifetime.

So it all comes down to: how do we relate This specific connection with That specific state? Obviously we need the concept of an id, and a means to relate the id to the connection/client on one hand, and to the state/context on the other.

There are different ways to achieve this: using maps (and locks), or capturing an object in a closure run by a goroutine. Here, the goroutine itself provides the concept of an id - its execution context is unique to our object, and our object lives nowhere but there - and in fact an HTTP server creates one goroutine per connection.

Server Arch

With maps around, we need lots of locking/unlocking, plus bulk housekeeping and of course some cleaning up afterwards - non-amortized pauses, waiting to be paid for - which in some cases requires firing up even more goroutines.

But our server has already created one goroutine for each connection, right? Can’t we just employ them?

“Everything that has a beginning has an end, Neo”

And it would be nice if we were aware of this beginning and end, and of other events in between. Let’s open our own agency. Our agents are capable of performing certain tasks - no matter the background - and provide a specific interface. In fact, we are opening an Agent Factory.

An Agent is an interface - say type Agent interface{ ... } - and our Agent Factory is a function like func(...) Agent.

Now we share our Agent Factory with our server, so it can assign a dedicated agent to each incoming connection.

Server Arch

Each dedicated goroutine - which the server creates anyway - calls our Agent Factory when starting its internal loop and gets an agent for itself. When new data arrives, or anything else happens, the goroutine pokes the agent to do its bidding. In effect, the agent runs synchronously inside the goroutine’s execution context.

Server Arch

Conclusion

Use factories. By employing an Agent Factory in our server, instead of a single-instance handler, the global maps go away. No more locking/unlocking per access, per event, per connection; no extra goroutines; and no bulk housekeeping - which also reduces sudden bulk GC!

Eliminating global state always feels good and pays off, in terms of both performance and simplicity - bottlenecks eliminated! As a sample implementation of this pattern, we can study this TCP Server. All we have to do is implement an Agent.

type Agent interface {
	Proceed() error
}

And our server uses our Agent Factory, func(...) Agent, to create an Agent for each connection. The agent does all the housekeeping, while the server handles the supervisory and architectural aspects of it.

We solved the problem of id. Although the words connection, client and context can have complex relationships, they served well in the context of this post. And we eliminated some global state that was very congestion-prone. There might still be some need for global state, but it is not part of our logic - think metrics and throttling.

Eliminating global state always feels good and always pays off!