Kamailio as an SBC: five years on

In early 2013, more than five years ago, I wrote an article on my personal blog: Kamailio as an SBC (Session Border Controller). It was in response to the often-asked question in the Kamailio and open source-focused VoIP consulting arena about whether Kamailio is an SBC, or can be made to serve as an SBC.

Far from having been put to bed, the question rages on; we get it now more than ever, and certainly even more than at the time the article was written. In that original article, the essential thesis was: “Kind of, depending on what exactly you mean by ‘SBC’”.

That answer continues to be broadly accurate, in our view. However, five years of additional industry experience, observation of broad trends in our corner of the SIP platform-building universe, and Kamailio project evolution have certainly shifted the contours somewhat.

Let’s revisit the topic with an updated 2018 view.

Is it the right question?

In the original article, I made the point that there are two different understandings of “SBC” floating around out there. One is highly nuanced and product-specific, generally held by large telco types who are highly specialised on mainstream commercial SBC platforms. The other view, which enjoys much wider currency, is that of a carrier and/or customer endpoint-facing SIP element that performs traffic aggregation and routing in a rather generic sense. I argued that Kamailio is suitable to the latter, but falls rather short of what qualified specialists mean relative to the former.

Having had half a decade to ponder this, I’ve come to increasingly see it as an ontological problem. The marketing departments of major SBC vendors, starting from Acme Packet, have successfully convinced IP telecoms practitioners, in the enterprise market at least, that this thing called an “SBC” is the basic building block of a “carrier-grade” SIP service delivery platform. It’s a Swiss Army Knife routing box, a reassuring “voice firewall” for helpless Class 5 platforms exposed to the brutal storms and harsh, cold winds of the public Internet, a solution to the problem of juggling multiple signalling IPs, an adaptation layer for non-interoperable behaviour, a place where the vicissitudes of NAT can be sheathed off, and everything in between.

But more importantly, it’s just how it’s done. SBC is the word we use to describe the sort of thing that this eclectic grab-bag o’ SIP gateway is. Rare is a marketing triumph so total that it reshapes our mental categories and how we think about things at an almost metaphysical level, regardless of the objectively available options in underlying technology. It’s on par with the crystallisation of Kleenex® as a moniker for “paper tissue” here in America; only one kind of “paper tissue” is now allowed to exist. It pretty much has to come in a rectangular box with a plastic aperture on the top, regardless of brand. It’s risen to the level of conventional wisdom. All (non-bath) paper tissue putatively comes in such boxes. All tissue is Kleenex®.

I view that as a serious problem. Considering the wealth of concepts that exist in the market space of SIP platform-building, it’s rather grim that our answers about Kamailio in the carrier space so often have to be framed in terms of the SBC question.

A great many generic industrial uses of SBCs amount to little more than “SIP gateway + RTP relay + server-side NAT traversal intelligence in a box”. That’s a more accessible and open-minded framing of the desiderata. Talking to telco-heads about this stuff easily leads to the impression that this vocabulary is Ancient Greek, lost in long-forgotten manuscripts. It’s just all subsumed under “SBC”. I’m not trying to lead an anti-corporate revolt here, but from an engineering perspective, it’s really time to reclaim and re-democratise some of this language, lest it give way to trafficking exclusively in trademarked proper nouns of product names.

Some may say that’s rather tendentious, considering we’re an open-source VoIP platform consultancy and product vendor. Maybe so. Far be it from me to say that SBCs don’t have their place; they certainly do. But I think everyone would be well-served if people requesting a “Kamailio SBC replacement” took a step back and asked:

1. What do I actually need?

2. What is the SIP vernacular for that, from the point of view of someone conversant with SIP standards and market realities alike?

3. Does #2 actually compute to an “SBC” from Oracle, Genband, Sansay, Metaswitch, etc?

4. Do I actually need it to behave that way? Why?

5. Is it reasonable or desirable for Kamailio to behave that way?

6. Is there a compelling alternative that has different formal technical properties but substantially the same effect on the bottom line?

We’ve built our entire business on competent exploration of those kinds of issues.

By way of final addendum to the philosophical portion of this article, I will note that the colonisation of The Cloud™ by the service provider world has been productive inasmuch as it has increasingly exposed some of this notion, that Real VoIP Networks are built out of Big SBC Appliances, to be somewhat intellectually fraudulent. (Shameless plug: I gave a whole talk on ITSPs moving to the cloud at Kamailio World 2018 in Berlin recently.) Major SBC vendors, who lobbied so hard to establish this reflexive social convention of the Big SBC Appliance as the universal Lego block of service provider networks, are — without a trace of irony — putting out virtual machine images of their platforms in an effort to remain cloud-relevant.

If you can virtualise a Sonus SBC—it’s just SIP software!—who knows what else you can virtualise instead?

RTP relay

So much, then, for fighting the power.

My original article feebly pointed to rtpproxy as an RTP relay solution and implied that it cannot compete with the horsepower of ASIC-assisted RTP forwarding in proprietary boxes.

A few years ago now, SIPwise released RTPEngine, which most certainly can. RTPEngine uses kernel-mode packet forwarding, making use of the Linux kernel netfilter APIs, to achieve RTP relay at close to wire speed and in a manner which bypasses userspace I/O contention. It’s got a raft of other features, from SRTP/crypto support to WebRTC-friendly ICE, not to mention recent innovations (admittedly in user-space) in call recording and transcoding.

RTPEngine has been shown to handle over 10,000 concurrent bidirectional RTP sessions on commodity hardware, and additional RTPEngine instances can be trivially added behind Kamailio without reloads or restarts, so if that call volume isn’t enough for you, just stack more gateways. It even supports replicating RTP session state to a redundant mate in real time using Redis.
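
A minimal sketch of what scaling out looks like on the Kamailio side, assuming illustrative addresses for the RTPEngine control sockets; new sessions are distributed across the sockets in the set:

loadmodule "rtpengine.so"

# Two RTPEngine control sockets in one set; Kamailio spreads new sessions
# across them. Addresses and ports are illustrative.
modparam("rtpengine", "rtpengine_sock", "udp:10.0.0.21:2223 udp:10.0.0.22:2223")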

Considering it’s under open-source licence, it’s hard to argue with the unit economics or port density of that, more especially on modern multi-core commodity hardware. It’s hard to see the necessity of dense ASIC-assisted RTP forwarding for cost scaling in 2018.

Topology hiding

If topology hiding for commercial reasons is your overriding concern, so that your customers can’t discover your vendors and vice versa, you’ve probably got it in your head that you need a B2BUA (back-to-back user agent).

It’s true; Kamailio is a SIP proxy, and that’s not going to change. Logical call leg A goes in, logical call leg A comes out, largely unadulterated.

Nevertheless, a great deal of work has gone into the topoh and topos modules, which take two different approaches to hiding the network addresses on the other side of Kamailio. Both approaches make use of Kamailio’s ability to remain in the signalling path, both in the context of a SIP transaction and the more persistent SIP dialog. Both comply with the fundamental dictum that a SIP proxy shall not alter the essential state-defining characteristics of a SIP message as constructed by the respective UAs (User Agents) in a manner that shall be known to those UAs.
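
By way of illustration, a minimal sketch of engaging topoh (the parameter values are illustrative; topos takes a database-backed approach and is configured differently):

loadmodule "topoh.so"

# Key used to encode the hidden addressing, and the address the far side
# sees in place of the real ones. Values are illustrative.
modparam("topoh", "mask_key", "some-long-random-key")
modparam("topoh", "mask_ip", "10.1.1.10")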

By the very nature of the complex state management and sleight of hand that these modules do, there are likely always going to be edge-cases where they don’t work as expected.

For those cases, I continue to recommend a high-performance signalling-only B2BUA in series in the call path. Although the community edition of SEMS (SIP Express Media Server) suffers from some neglect, I still wholeheartedly recommend its SBC module on the basis of sheer performance.

Registration

In the original article, I made the claim that many registrars don’t properly support the SIP Path extension. Experience suggests the number of these has dwindled, and Path is a very reasonable way to handle a scenario such as relaying registrations to specific hosted PBX instances inside a private network.
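
A minimal sketch of relaying registrations with Path, assuming an illustrative upstream registrar address; the registrar on the other end must, of course, support Path for this to work:

loadmodule "path.so"

   ...

   # Relay REGISTERs to an upstream registrar, inserting a Path header with
   # the received source address so that requests toward the contact come
   # back through this proxy. The registrar address is illustrative.
   if(is_method("REGISTER")) {
      add_path_received();
      $du = "sip:registrar.example.com:5060";
      t_relay();
      exit;
   }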

As has been the case for most of the project’s history, Kamailio continues to perform admirably as an actual registrar. If your architecture allows for the concentration of large amounts of registrations in a centralised registrar, this is the best bet.

Truly originating registrations is the province of the UAC module, and some additional management handles have been added to make the process more controllable in real time. Nevertheless, Kamailio cannot reasonably re-originate registrations on a one-to-one basis.
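
For completeness, a rough sketch of the uac module’s remote registration parameters (the database URL and contact address are illustrative), with the registrations themselves defined in the uacreg table:

loadmodule "uac.so"

# Registrations to originate are read from the uacreg table at this URL;
# the contact address below is what is advertised upstream. Values are
# illustrative.
modparam("uac", "reg_db_url", "mysql://kamailio:secret@localhost/kamailio")
modparam("uac", "reg_contact_addr", "198.51.100.10:5060")
modparam("uac", "reg_timer_interval", 60)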

In large-scale platforms, there is significant demand for a registration “middlebox” which can absorb high-frequency re-registrations and parlay them into lower-frequency re-registrations upstream. This requirement arises in large measure from the irony that many expensive enterprise platforms cannot cope with high message volume nearly as well as open-source software, and fall over easily in the face of the registration onslaughts from modern NAT’d customer environments.

OpenSIPS have taken the lead here with the mid-registrar module, which caters to this very need. It is possible to implement something like this manually with Kamailio, but it would take a great deal of state-keeping on the part of the route script programmer. A module to accommodate this niche may be forthcoming in the future.

Edit: As is so often the case, it turns out that perceived limitations are more a failure of the author’s knowledge than technology. While it is true that Kamailio does not have a module specifically named and geared toward the “registrar middlebox” role, Daniel-Constantin Mierla, the chief developer of Kamailio, helpfully pointed out to me in this post on the VoiceOps mailing list that Kamailio’s UAC module has existing functionality that can be used toward the same end. Additionally, one of the virtues of open-source is that enhanced functionality can be added in a reasonable time.

Replication and sharing state

One of the most exciting developments in Kamailio in recent years has been the introduction of DMQ, a SIP-transported distributed replication system with sensible node discovery and failover features.

Prior to DMQ, most Kamailio redundancy strategies involved reliance on shared database backing. This is an inefficient bottleneck and a significant I/O burden. DMQ presents us with the possibility of using in-memory storage backing for things like the registrar and cutting the database bottleneck out. Dialogs can also be replicated with DMQ, as can generic hash tables (frequently used as a distributed data store), and a number of other things. DMQ’s dynamic character is also very complementary to cloud architecture. Watch this space.
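
A minimal sketch of a DMQ setup, assuming illustrative node addresses; here registrar (usrloc) state is replicated between nodes over SIP rather than through a shared database:

loadmodule "dmq.so"
loadmodule "dmq_usrloc.so"

# This node's DMQ address, and a peer from which to learn about the rest
# of the cluster. Addresses are illustrative and should correspond to a
# listen socket.
modparam("dmq", "server_address", "sip:10.0.0.11:5090")
modparam("dmq", "notification_address", "sip:10.0.0.12:5090")
modparam("dmq", "num_workers", 2)

# Replicate usrloc (registrar) contacts over DMQ.
modparam("dmq_usrloc", "enable", 1)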

SIP over TCP and TLS transports has seen significantly increased uptake in recent years, and session state replication for those remains a sore topic for sophisticated connoisseurs. TCP sessions cannot be transferred from one Kamailio host to another in the event of a failure, as that would require deep operating system network stack involvement. This matters because we live in a NAT’d world where new TCP connections cannot be opened to customers on demand; existing ones must be reused instead (conforming to the general thesis of SIP Outbound). That level of network stack coupling is achievable by the big SBC brands and remains a selling point for them. Worst-case exposure can be mitigated with low re-registration intervals, but that puts pressure on the throughput side of the registrar and runs up against the minimum re-registration interval of many devices, which remains 60 seconds. Getting databases out of the registration persistence business can nevertheless help with that.

IMS support

Kamailio has sophisticated IMS support, led by NG Voice GmbH and more especially Carsten Bock. Carsten has given many insightful presentations on using Kamailio with IMS and VoLTE over the years.

If you are interested in this topic, you should also take a look at OpenIMSCore.

There has been a great deal of interest in Kamailio from mobile operators and MVNOs.

SIP Outbound

For some years now, Kamailio has supported SIP Outbound (RFC 5626). Use of it in the wild remains very limited, but when you have Outbound-ready clients, you’ll have an Outbound-ready server, a long time in the making.

SIP over WebSocket / WebRTC support

Kamailio is an excellent candidate for a SIP WebRTC gateway, with its extensive WebSocket support and RTPEngine for ICE and DTLS-SRTP.
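
The skeleton of the WebSocket side is small; a minimal sketch using the xhttp and websocket modules (the 404 fallback is just one reasonable choice, and the core setting tcp_accept_no_cl=yes is typically also needed):

loadmodule "xhttp.so"
loadmodule "websocket.so"

# Upgrade incoming HTTP requests on a TCP/TLS listener to SIP over WebSocket.
event_route[xhttp:request] {
   if(ws_handle_handshake())
      exit;

   xhttp_reply("404", "Not Found", "", "");
}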

Bottom line

Kamailio has its limits, and there are absolutely cases where a mainstream commercial SBC would be an appropriate choice. For instance, due to its architecture, Kamailio cannot accommodate listening on a large number of network interfaces, so if you are in the business of forwarding signalling across a large amount of customer VLANs/VRFs, you may need an SBC.

More pointedly, Kamailio is a SIP proxy at heart. No amount of clever topology hiding modules can change the very real sense in which the UAs on both sides of Kamailio are interoperating with each other rather than with Kamailio per se. We are a far cry from the mid-2000s, and most mainstream SIP endpoints are shockingly interoperable these days. Nevertheless, if SIP interoperability is your particular woe for any number of niche reasons, remember the rule of “garbage in, garbage out”. At the very least, you would need to pair Kamailio with a B2BUA that can be generous in what it accepts and conservative in what it emits.

Those of you who work with WebRTC know that it presents a rich and fertile field for interoperability woes as well. We tend to take the view that this is part of a larger conversation about whether WebRTC is ready for prime time.

The other key point about Kamailio is that it’s configured in an imperative manner, driven by a programmatic route script. In this respect, it’s not like a finished product with a static feature set that simply needs to be enabled or disabled via declarative configuration files. I often compare Kamailio to an SDK with a SIP proxy core, and I think that comparison is meritorious. You will need some software engineering expertise to use Kamailio in a non-trivial way. You must meticulously script your SIP outcomes at a rather low level.

Nevertheless, we come full-circle to the idea that Kamailio presents a compelling, full-featured and viable SBC alternative, if you insist on that terminology. You’ll have to approach the question in a technically SIP-savvy and open-minded way. Moreover, while Kamailio has always catered well to the needs of large-scale service providers, it has evolved even more technical capabilities in recent years to facilitate that.

Service provider work is the only kind of work we do, and the reason we’re in business is that quite a lot of them are quietly subverting conventional SBC wisdom and thinking outside the traditional SBC. Don’t take my word for it; their numbers speak for themselves.

Server-side NAT traversal with Kamailio: the definitive guide

If you are a retail-type SIP service provider — that is, you sell SIP service to SMB end-users rather than wholesale customers or enterprises — and your product does not include bundled private line or VPN connectivity to your customers, the vast majority of your customer endpoints will be NAT’d.

If you’re using Kamailio as a customer-facing “SBC lite” to front-end your service delivery platform, this article is for you.

There’s a lot of confusion around best practices for NAT handling with Kamailio, given (1) that there are multiple approaches to handling NAT in the industry and also (2) that Kamailio idioms and conventions for this have evolved over time. I hope this article helps to address these issues comprehensively and puts lingering questions to rest.

Before we delve into that, let’s lay down some important background…

Why is NAT such a problem with SIP?

There are a few reasons:

First — for VoIP telephony purposes, at least — SIP primarily provides a channel in which to have a conversation about the establishment of RTP flows on dynamically allocated ports. This puts it in league with other protocols such as FTP, which also do not multiplex data and “metadata” over the same connection, and instead create ephemeral connections on unpredictable dynamic ports. This is different to eminently “NATtable” protocols like HTTP, where all data is simply sent back down the same client-initiated connection.

Second, VoIP by nature requires persistent state and reachability. Clients not only have to make outbound calls, but also to receive inbound calls, possibly after the “connection” (construed broadly) has been inactive for quite some time. This differs from a more or less ephemeral interaction like that of HTTP (though this claim ignores complications of the modern web such as long polling and WebSockets).

Third, most SIP in the world still runs over UDP, which, in its character as a “connection-less” transport that provides “fire and forget” datagram delivery, is less “NATtable” than TCP. Although UDP is connection-less, NAT routers must identify and associate UDP messages into stateful “flows” as part and parcel of the “connection tracking” that makes NAT work. However, on average, their memory for idle UDP flows is shorter and less reliable than for TCP connections — in some cases, egregiously worse, no more than a minute or so. That calls for more vigorous keepalive methods. Combined with the grim reality of increasing message size and the resulting UDP fragmentation, it’s also an excellent argument for using TCP at the customer access edge of your SIP network, but to be sure, that’s a decision that comes with its own trade-offs, and in any case TCP is not a panacea for all SIP NAT problems.

Finally: despite the nominally “logical” character of SIP URIs, SIP endpoints have come to put network and transport-layer reachability information (read: IP addresses and ports) directly into SIP messaging. No clean and universal logical-to-NLRI translation layer exists, such as DNS or ARP. A SIP endpoint literally tells the other end what IP address and port to reach it on, and default endpoint behaviour on the other side is to follow that literally. That’s a problem if that SIP endpoint’s awareness is limited to its own network interfaces (more on that in the next section).

SIP wasn’t designed for NAT. Search RFC 3261 for the word “NAT”; you’ll find nothing, because it presumes end-to-end reachability that today’s IPv4 Internet does not provide.

Client vs. Server-side NAT traversal and ALGs

Broadly speaking, there are two philosophies on NAT traversal: client-side NAT traversal and server-side NAT traversal.

Client-side NAT traversal takes the view that clients are responsible for identifying their WAN NLRI themselves and making correct outward representations about it. This is the view taken by the WebRTC and ICE scene. This is also the central idea of STUN and some firewalls’ SIP ALGs (Application Layer Gateways).

Server-side NAT traversal takes the opposite view; the client needs to know nothing, and it’s up to the SIP server to discover the client’s WAN addressing characteristics and how to reach it. In broad terms, this means the server must tendentiously disbelieve the addresses and ports that appear in the NAT’d endpoint’s SIP packets, encapsulated SDP body, etc., and must instead look to the source address and port of the packets as they actually arrive.

Server-side NAT traversal is the vantage point of major SBC vendors, and is also the most universal solution because it does not require any special accommodation by the client. Server-side is what this article is all about.

One last note on the dichotomy: client-side and server-side approaches don’t play well together much of the time. Most server-side implementations detect NAT’d clients by identifying disparities between the addresses/ports represented in SIP packets and the actual source IP and port, and take appropriate countermeasures. While it is theoretically unproblematic to give an “effectively public” (that is to say, non-NAT’d) endpoint NAT treatment anyway, this is only true if every part of the client message containing addressing is appropriately mangled at every step.

ALGs (Application Layer Gateways), a type of client-side traversal solution embedded in the NAT router itself, are especially notorious for foiling this by substituting in correct public IP/port information. However, in my experience, and that of our service provider customers, they only correct some parts of the SIP message and not others (e.g. they will fix the Via but not the Contact address, and perhaps not touch the SDP at all, and even if they do, they don’t open the right RTP ports). This way lies madness, and that’s why we hate ALGs so much, but the same caveats can sometimes apply to STUN-based approaches.

“Great, the ALG fixed all the problems!” said no one, ever. Not that I know of, anyway. Some NAT gateways allow one to disable the SIP ALG, and if you are using a server-side NAT traversal approach, you should do this. However, other consumer-grade and SMB NAT gateways do not allow you to do this, and dealing with them can be nigh impossible. The best solution is to replace the NAT gateway with a better one. If that’s not possible, sometimes they can be bypassed by using a non-standard SIP port (not 5060) on either the client or the server side, or both. However, some of them actually fingerprint the message as SIP based on its content, regardless of source or destination port. They’re pretty much intractable.

In short, if you’re going to do server-side NAT traversal, make every effort to turn off any client-side NAT traversal measures, including STUN and ALGs. The “stupidity” of the client about its wider area networking is not a bug in this scenario, but a feature.

NAT and RTP

A server-side NAT traversal strategy typically requires solutions for RTP, not just SIP.

Even if you get SIP back to the right place across a NAT’d connection, that doesn’t solve two-way media. The NAT’d endpoint will send media from the port declared in its SDP stanza (assuming symmetric RTP, which is pretty much universal), but this will be remapped to a different source port by the NAT gateway.

This requires a more intelligent form of media handling, commonly referred to as “RTP latching” and by various other terms. This is where the RTP counterparty listens for at least one RTP frame arriving at the destination port it advertised, harvests the source IP and port from that packet, and uses them for the return RTP path.

If you have a publicly reachable RTP endpoint on the other side of Kamailio which can behave that way, such as Asterisk (with the nat=yes option, or whatever it is now), you don’t need an intermediate RTP relay. However, not all endpoints will do that. For example, if you are in the “minutes” business and have wholesale carriers behind Kamailio, their gateways will most likely not be configured for this behaviour, more as a matter of policy than technology.

There are other scenarios where intermediate RTP relay may not be necessary. For example, if you are providing SIP trunking to NAT’d PBXs, rather than hosted PBX to phones (Class 4 rather than Class 5 service, in the parlance of the North American Bell system), you may be able to get away with DNAT-forwarding a range of RTP ports on the NAT gateway into a single LAN endpoint. This works because there is a single, static LAN destination. A number of our customers use this strategy to great effect. Another reason you may need an intermediate RTP relay is simply to bridge topology; if your ultimate media destinations are on a private network, as for example in my network diagram below, you’ll need to forward RTP between them.

These are important issues to consider because if your entire customer base is NAT’d, being in the RTP path will greatly change the hardware and bandwidth economics of your business. Nevertheless, assuming you’ve determined that you do need to handle RTP for your customers, convention has settled around Sipwise’s RTPEngine. RTPEngine is an extremely versatile RTP relay which performs forwarding in kernel space, achieving close to wire speed. Installation and setup of RTPEngine is outside the scope of this tutorial, but the documentation on the linked GitHub page is sufficient.

As with all other RTP relays supported by Kamailio, RTPEngine is an external process controlled by Kamailio via a UDP control socket. When Kamailio receives an SDP offer or answer, it forwards it to RTPEngine via the rtpengine control module, and RTPEngine opens a pair of RTP/RTCP ports to receive traffic from the given endpoint. The same happens in the other direction, upon handling the SDP offer/answer of the other party. These new endpoints are then substituted into the SDP prior to relay, with the result that RTPEngine is now involved in both streams.
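
A minimal sketch of wiring Kamailio to the RTPEngine control socket (the address and port are illustrative):

loadmodule "rtpengine.so"

# UDP control socket on which the RTPEngine daemon is listening.
modparam("rtpengine", "rtpengine_sock", "udp:127.0.0.1:2223")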

What is to be done?

To provide server-side NAT traversal, then, the following things must be done within the overall logic of Kamailio route script.

  1. Ensure that transactional replies return to the real source port – When an endpoint sends a request to your SIP server, normal behaviour is to forward replies to the transport protocol, address and port indicated in the topmost Via header of the request. In a NAT’d setting, this needs to be ignored and the reply must instead be returned to the real outside source address and port of the request. This is provided for by the rport parameter, as elaborated upon in RFC 3581. The trouble is, not all NAT’d endpoints include the ;rport parameter in their Via. Fortunately, there is a core Kamailio function, force_rport(), which tells Kamailio to treat the request as if ;rport were present.
  2. Stay in the messaging path for the entire dialog life cycle – If Kamailio is providing far-end NAT traversal functionality for a call, it must continue to do so for the entire life cycle of the call, not just the initial INVITE transaction. To tell the endpoints to shunt their in-dialog requests through Kamailio, a Record-Route header must be added; this is accomplished by calling record_route() (rr module) for initial INVITE requests.
  3. Fix Contact URI to be NAT-safe – This applies to requests and replies, and to INVITE and REGISTER transactions alike. This will be discussed further below.
  4. Engage RTPEngine – if necessary

It’s really as simple as that.

We will discuss how to achieve these things below, but first…

Testing topology

For purposes of example in this article, I will be using my home Polycom VVX 411, on LAN subnet 172.30.105.0/24. It talks to a Kamailio server, 70.1.2.1, which also acts as a registrar, and front-ends an elastic group of media servers which are located on a private subnet, 192.168.2.0/24. This also means that the Kamailio server bridges SIP (and as we shall see, RTP, by way of RTPEngine) between two different network interfaces. This is perhaps more complex than the topology needs to be by way of example, but also illuminates a fuller range of possibilities.

A diagram may help:

[Diagram: NAT traversal topology]

The nathelper module

The nathelper module is Kamailio’s one-stop shop for NAT traversal-related functionality. Its parameters and functions encapsulate three main functional areas:

  • Manipulation of SIP message attributes to add outside-network awareness;
  • Detection of NAT’d endpoints;
  • Keepalive pinging of NAT’d endpoints.

There is a subtle link between this module and the registrar module, in that the received_avp parameter is shared among them—if you choose to take that approach to dealing with registrations.

The nat_uac_test() function performs a user-defined combination of tests to decide if an endpoint is NAT’d. The argument is a bitmask; if you’re not familiar with the concept from software engineering, it means that a combination of flags can be specified by adding them together. For example, to apply both flag 1 and flag 2, use an argument of “3”.

Here is a REGISTER request from my NAT’d endpoint:

2018/05/07 06:53:26.402531 47.39.154.156:5060 -> 192.168.2.220:5060
REGISTER sip:sip.evaristesys.com SIP/2.0
Via: SIP/2.0/UDP 172.30.105.251:5060;branch=z9hG4bKffe427d2756F1643
From: "alex-balashov" <sip:alex-balashov@sip.evaristesys.com>;tag=B84E1216-803F7CD7
To: <sip:alex-balashov@sip.evaristesys.com>
CSeq: 3561 REGISTER
Call-ID: 4ae7899d1cc396640e440df7c72662d3
Contact: <sip:alex-balashov@172.30.105.251:5060>;methods="INVITE, ACK, BYE, CANCEL, OPTIONS, INFO, MESSAGE, SUBSCRIBE, NOTIFY, PRACK, UPDATE, REFER"
User-Agent: PolycomVVX-VVX_411-UA/5.6.0.17325
Accept-Language: en
Authorization: [omitted]
Max-Forwards: 70
Expires: 300
Content-Length: 0

The Via header specifies where responses to this transaction should be sent. It can be clearly seen that although the Via header contains a private IP of 172.30.105.251:5060, the actual source of the request is 47.39.154.156:5060 (and, it should be noted, the fact that the internal port 5060 maps to an external port of 5060 is merely a coincidence from how this particular NAT gateway works; it is more typical for it to be mapped to an arbitrary and different external port). Therefore, in this case, test flags 2 and 16 to nat_uac_test() would detect this anomaly.

There is some debate as to whether the various tests for RFC 1918/RFC 6598 (private) addresses have merit. It’s tempting to think that one can reveal NAT straightforwardly by checking for private addresses, e.g. 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8, in the Via or Contact headers. However, to return to the network diagram above, Kamailio is multihomed on a private as well as a public network. Although symmetric SIP signalling can be taken for granted from almost any SIP endpoint nowadays, it is nevertheless poor form to give NAT treatment to an endpoint that is directly routable. Give some thought to whether the central theme of your NAT detection approach should be in looking for private addresses, or looking for discrepancies between the represented address/port and the actual source address/port. I personally favour the latter approach.

The “old books” of nathelper vs. the new

Traditional OpenSER-era and early Kamailio folklore prescribes the use of fix_nated_contact() and fix_nated_register() functions. One can still find these in a lot of books and documentation:

fix_nated_contact() rewrites the domain portion of the Contact URI to contain the source IP and port of the request or reply.

fix_nated_register() is intended for REGISTER requests, so it is only relevant if you are using Kamailio as a registrar or forwarding registrations onward (i.e. using Path). It takes a more delicate approach, storing the real source IP and port in the received_avp, where it can be retrieved by registrar lookups and set as the destination URI, Kamailio’s term for the next-hop forwarding destination (overriding the Request URI domain and port).

fix_nated_register() is generally unproblematic, though it does require a shared AVP with the registrar module. From a semantic point of view, however, fix_nated_contact() is deeply problematic, in that it modifies the Contact URI and therefore causes the construction of a Request URI, in requests incoming to the NAT’d client, which is not equivalent to the Contact URI populated there by the client. RFC 3261 says thou shalt not do that.

The nathelper module nowadays offers better idioms for dealing with this mangling: handle_ruri_alias() and set_contact_alias()/add_contact_alias(). Using these functions, this:

Contact: <sip:alex-balashov@172.30.105.251:5060>

is turned into:

Contact: <sip:alex-balashov@172.30.105.251:5060;alias=47.39.154.156~5060~1>

and stored (if REGISTER) or forwarded (anything else). When handle_ruri_alias() is called, the ;alias parameter is stripped off, and its contents populated into the destination URI. The beautiful thing about handle_ruri_alias() is that if the ;alias parameter is not present, it silently returns without any errors. This simplifies the code by removing the need for explicit checking for this parameter.

For the sake of simplicity and minimum intrusiveness, I strongly recommend using these functions in place of the old fix_*() functions.

Implementation

Near the top of the main request_route, you’ll probably want to have a global subroutine that checks for NAT. At this point, the logic will not be specialised based on the request method or whether the request contains an encapsulated SDP body. Critically, ensure that this happens prior to any authentication/AAA checks, as 401/407 challenges, along with all other replies, need to be routed to the correct place based on force_rport():

   
   # 18 = 2 + 16: the source IP and the source port differ from the Via header
   if(nat_uac_test("18")) {
      force_rport();

      if(is_method("INVITE|REGISTER|SUBSCRIBE"))
         set_contact_alias();
   }

Later, in the loose_route() section that deals with handling re-invites and other in-dialog requests, you’ll need to engage RTPEngine and handle any present ;alias in the Request URI:

   if(has_totag()) {
      if(loose_route()) {
         if(is_method("INVITE|UPDATE") && sdp_content() && nat_uac_test("18"))
             rtpengine_manage("replace-origin replace-session-connection ICE=remove");

         ...

         handle_ruri_alias();

         t_on_reply("MAIN_REPLY");

         if(!t_relay())
            sl_reply_error();

         exit;
      }
   }

Initial INVITE handling is similar:

request_route {
   ...

   if(has_totag()) {
      ...
   }

   ...

   t_check_trans();

   if(is_method("INVITE")) {
      if(nat_uac_test("18") && sdp_content()) 
         rtpengine_manage("replace-origin replace-session-connection ICE=remove");

      t_on_reply("MAIN_REPLY");

      if(!t_relay())
         sl_reply_error();

      exit;
   }
}

To accommodate the case that requests are inbound to the NAT’d endpoint or the case that NAT’d endpoints are calling each other directly, an onreply_route will need to be armed for any transaction involving a NAT’d party. Its logic should be similar:

onreply_route[MAIN_REPLY] {
   if(nat_uac_test("18")) {
      force_rport();
      set_contact_alias();

      if(sdp_content())
         rtpengine_manage("replace-origin replace-session-connection ICE=remove");
   }
}

For serial forking across multiple potential gateways, it is strongly recommended that you put initial invocations of RTPEngine into a branch_route(), so that RTPEngine can receive the most up-to-date branch data and potentially make branch-level decisions.
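
A minimal sketch of that arrangement; "MAIN_BRANCH" is a route name chosen for this example, and the branch route is armed with t_on_branch() before t_relay():

# In request_route, before t_relay():
#    t_on_branch("MAIN_BRANCH");

branch_route[MAIN_BRANCH] {
   if(nat_uac_test("18") && sdp_content())
      rtpengine_manage("replace-origin replace-session-connection ICE=remove");
}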

Registration requests are already handled by the general NAT detection stanza above. However, registration _lookups_ require an additional nuance:

route[REGISTRAR_LOOKUP] {
   ...

   if(!lookup("location")) {
      sl_send_reply("404", "Not Found");
      exit;
   }

   handle_ruri_alias();

   if(!t_relay())
      sl_reply_error();

   exit;
}

That’s really it!

What about NAT’d servers?

In cloud and VPS environments, it is getting quite common to have a private IP address natively homed on the host with an external public IP provided via 1-to-1 NAT.

Kamailio’s core listen directive has a parameter to assist with just this:

listen=udp:192.168.2.119:5060 advertise 70.1.2.1:5060

This will ensure that the Via and Record-Route headers reference the public IP address rather than the private one. It has no impact on RTP.

Topology bridging with RTPEngine + NAT

The discerning observer will note that the foregoing invocations of rtpengine_manage() did not address a key requirement of the network topology outlined in the diagram: the need to bridge two disparate networks.

This requires two different RTPEngine forwarding interfaces, one of which has a public IP via 1-to-1 NAT. The latter would seem to require something like an advertise directive, but for RTP. Fortunately, RTPEngine has such an option, applied with the ! delimiter:

OPTIONS="-i internal/192.168.2.220 -i external/192.168.2.119!70.1.2.1"

The direction attribute to rtpengine_offer() (or, equivalently, the initial call to rtpengine_manage()) allows one to specify the ingress and egress interfaces respectively:

rtpengine_manage("replace-origin replace-session-connection ICE=remove direction=internal direction=external");

Subsequent calls to rtpengine_manage(), including calls in onreply_route, will appropriately take into account this state and reverse the interface order for the return stream as needed.

Keepalives and timeouts

The most common challenge with NAT’d SIP endpoints is that they need to remain reachable in a persistent way; they can receive inbound calls or other messages at any moment in the future.

Recall that NAT gateways add mappings for connections or connection-like flows (in the case of UDP, remember that for NAT purposes it isn’t truly “connection-less”) that they detect, e.g. from 192.168.0.102:5060 to $WAN_IP:43928. For the time that the latter mapping exists, any UDP sent to $WAN_IP:43928 will be returned to 192.168.0.102:5060.

The problem is that this mapping is removed after relatively short periods of inactivity. In principle this is a good thing; you wouldn’t want your NAT gateway’s memory filled up with ephemeral “connections” that have long ceased to be relevant. However, while, in our experience, most timeouts for UDP flows are in the range of a few minutes, there are some routers whose “memory” for UDP flows can be exceptionally poor — one minute or less. The same thing holds true for TCP, but UDP tends to be affected more egregiously.

When the connection tracking “mapping” goes away, the NAT gateway drops incoming packets to the old $WAN_IP:43928 destination on the floor. Consider this example:

[Screenshot: packet capture of the failed attempt to reach the phone]

In this test topology, 10.150.21.6 is a Freeswitch PBX on a private network (10.150.21.0/24) that receives registrations relayed from Kamailio (with help from the Path header). Kamailio is multi-homed on a private (10.150.20.2) and public (209.51.167.66) interface, the latter of which is presented to outside phones.

A registration which occurred about 15 minutes prior had established a contact binding of 47.39.154.156:5060 for my AOR (Address of Record). However, as no activity had occurred on this flow in the interim, the NAT router “forgot” about it, and you can see that efforts to reach the phone go nowhere. An ICMP type 3 (port unreachable) message (not shown) is sent back to Kamailio and that’s the end of it.

So, to keep NAT “pinholes” — as they’re often called — open, some means of generating frequent activity on the mapped flow is required.

The easiest and most low-hanging solution is to lower the re-registration interval of every NAT’d device to something like 60 or 120 seconds; this will generate a bidirectional message exchange (REGISTER, 401 challenge, 200 OK) which will “renew” the pinhole. This is effective in many cases. But there are two problems:

  1. Interval can’t be too low – Many devices or SIP registrars will not support a re-registration interval of less than 60 seconds, and believe it or not, that’s not low enough for some of the most egregious violators among the NAT gateways out there.
  2. Performance issues for the service provider – In a sympathetic moment, consider things from your SIP service provider’s perspective: tens of thousands (or more) of devices are banging on an SBC or an edge proxy — and with registrations no less, which are rather expensive operations that typically have some kind of database involvement for both authentication and persistent storage. That can greatly change the operational economics. So, as a matter of policy, allowing or encouraging such low re-registration intervals may not be desirable.

Enter the “keepalive”, a message sent by either server or client that garners some kind of response from the other party. Keepalives are an improvement over registrations in that they are not resource-intensive, since they invite only a superficial response from a SIP stack.

There are two types of keepalives commonly used in the SIP world: (1) a basic CRLF (carriage return line feed) message, short and sweet, and (2) a SIP OPTIONS request. While OPTIONS ostensibly has a different formal purpose, to query a SIP party for its capabilities, it’s frequently employed as a keepalive or dead peer detection (DPD) message.

Many end-user devices can send these keepalives, and if your end-user device environment is sufficiently homogenous and you exert high provisioning control over it, you may wish to configure it that way and simply have Kamailio respond to them. In the case of OPTIONS pings, you will want to configure Kamailio to respond to them with an affirmative 200 OK:

    if(is_method("OPTIONS")) {
        options_reply();
        exit;
    }

That goes in the initial request-handling section, toward the bottom of the main request route.

Pro-tip: Most end-user devices will send an OPTIONS message with a Request URI that has a user part, i.e.

OPTIONS sip:test@server.ip:5060 SIP/2.0

There is a valid debate to be had as to whether this is appropriate, since, strictly speaking, it implies that the OPTIONS message is destined for a particular “resource” (e.g. Address of Record / other user) on that server, rather than the server itself. Nevertheless, this is how a lot of OPTIONS messages are constructed. The Kamailio siputils module, which provides the options_reply() function, takes a fundamentalist position in this debate: it declines to reply to OPTIONS requests whose Request URI contains a user part, which gets in the way of many would-be replies.

A slightly unorthodox but effective workaround, since keepalive applications of the OPTIONS message seldom care about the actual content of the response:

    if(is_method("OPTIONS")) {
       sl_send_reply("200", "OK");
       exit;
    }

You may find more profit in server-initiated keepalive pinging, however. The Kamailio nathelper module provides extensive options for that as well. Start with the NAT pinging section.
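
A minimal sketch of server-initiated pinging with nathelper (the interval, flag number and From URI are illustrative):

loadmodule "nathelper.so"

# Ping NAT'd registered contacts every 30 seconds, and only those
# flagged as being behind NAT at registration time.
modparam("nathelper", "natping_interval", 30)
modparam("nathelper", "ping_nated_only", 1)

# Use SIP OPTIONS pings (rather than bare UDP packets) for contacts saved
# with branch flag 7 set; the From URI is illustrative.
modparam("nathelper", "sipping_bflag", 7)
modparam("nathelper", "sipping_from", "sip:keepalive@sip.example.com")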

UDP fragmentation

The tendency over time is for the median size of SIP messages to creep up: SDP stanzas get bigger as more codecs are on offer, new SIP headers and attributes enter into use, etc.

When the payload size of a UDP message gets to within a small margin of the MTU (typically 1500 bytes), it gets fragmented. UDP does not provide transport-level reassembly as TCP does. Because only the first fragment will contain the UDP header, it takes considerable cleverness to reassemble the message. Kamailio’s SIP stack can, of course, do this, as can many others in the mainstream FOSS world. However, many user agents cannot.

More damningly, there’s virtually a zero-percent chance that a NAT gateway will handle UDP fragmentation correctly. So, as a rule of thumb, it is eminently safe to assume that a NAT’d endpoint will not receive a fragmented SIP message.

Strategies for dealing with this phenomenon are detailed in a separate post all about UDP fragmentation on this blog, but the short answer is: use TCP. It’s what RFC 3261 says to do.
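
At the configuration level that can be as simple as adding a TCP listener alongside the UDP one; a minimal sketch (the address is illustrative):

listen=udp:203.0.113.10:5060
listen=tcp:203.0.113.10:5060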

What about SIP Outbound?

RFC 5626, known as “SIP Outbound”, is the latest opus of the IETF’s copious intellectual output on these topics. As is true of many such complicated ventures, Kamailio has supported it for a long time, but few SIP UAs in the wild do.

In brief, SIP Outbound proposes the establishment of multiple concurrent connection flows by the client for redundancy. A basic tenet of this arrangement is that all responsibility for establishment of connections through NAT, as well as all maintenance and upkeep of the same, is the responsibility of the client. There are a lot of other details involved, mainly to do with the registrar only using one of the “flows” at a time to reach a client with multiple registrations, so that multiple registrations established for redundancy do not lead to multiple forked INVITEs to the client. Some new parameters are involved in this new layer of bureaucracy for the registrar: instance-id and reg-id.

A full exposition of how it all works is certainly beyond the scope of this article, but RFC 5626 is captivating bedtime reading. However, until and unless widespread UA support for it appears, this author cannot be moved to say, “Use SIP Outbound, it’ll solve your NAT traversal problems!”

Tuning Kamailio for high throughput and performance

Unrivaled SIP message processing throughput is one of the central claims to fame made by Kamailio. When it comes to call setups per second (“CPS”) or SIP messages per second, there’s nothing faster than the OpenSER technology stack. Naturally, we pass that benefit on in the value proposition of CSRP, our Kamailio-based “Class 4” routing, rating and accounting platform.

It’s an important differentiator from many traditional softswitches, SBCs and B2BUAs, some of which are known to fall over with as little as 100 CPS of traffic, and many others to top out at a few hundred CPS spread over the entire installation. It shapes the horizontal scalability and port density of those platforms, and, accordingly, the unit economics for actors in the business: per-port licencing costs, server dimensioning, and ultimately, gross margins in a world where PSTN termination costs are spiraling down rapidly and short-duration traffic–love it, hate it–plays a conspicuous role in the ITSP industry prospectus.

That’s why it’s worthwhile to take the time to understand how Kamailio does what it does, and what that means for you as an implementor (or a prospective CSRP customer? :-).

Kamailio concurrency architecture

Kamailio does not use threads as such. Instead, when it boots, it fork()s some child processes that are specialised into SIP packet receiver roles. These are bona fide independent processes, and although they may be colloquially referred to as “threads”, they’re not POSIX threads, and, critically, don’t use POSIX threads’ locking and synchronisation mechanisms. Kamailio child processes communicate amongst themselves (interprocess communication, or “IPC”) using System V shared memory. We’re going to call these “receiver processes” for the remainder of the article, since that’s what Kamailio itself calls them.

The number of receiver processes to spawn is governed by the children= core configuration directive. This value is multiplied by the number of listening interfaces and transports. For example, in the output below, I have my children set to 8, but because I am listening on two network interfaces (209.51.167.66 and 10.150.20.2), there are eight processes for each interface. If I enabled SIP over TCP as well as UDP, the number would be 32. But a more typical installation would simply have 8:

[root@allegro-1 ~]# kamctl ps
Process:: ID=0 PID=22937 Type=attendant
Process:: ID=1 PID=22938 Type=udp receiver child=0 sock=209.51.167.66:5060
Process:: ID=2 PID=22939 Type=udp receiver child=1 sock=209.51.167.66:5060
Process:: ID=3 PID=22940 Type=udp receiver child=2 sock=209.51.167.66:5060
Process:: ID=4 PID=22941 Type=udp receiver child=3 sock=209.51.167.66:5060
Process:: ID=5 PID=22942 Type=udp receiver child=4 sock=209.51.167.66:5060
Process:: ID=6 PID=22943 Type=udp receiver child=5 sock=209.51.167.66:5060
Process:: ID=7 PID=22944 Type=udp receiver child=6 sock=209.51.167.66:5060
Process:: ID=8 PID=22945 Type=udp receiver child=7 sock=209.51.167.66:5060
Process:: ID=9 PID=22946 Type=udp receiver child=0 sock=10.150.20.2:5060
Process:: ID=10 PID=22947 Type=udp receiver child=1 sock=10.150.20.2:5060
Process:: ID=11 PID=22948 Type=udp receiver child=2 sock=10.150.20.2:5060
Process:: ID=12 PID=22949 Type=udp receiver child=3 sock=10.150.20.2:5060
Process:: ID=13 PID=22950 Type=udp receiver child=4 sock=10.150.20.2:5060
Process:: ID=14 PID=22951 Type=udp receiver child=5 sock=10.150.20.2:5060
Process:: ID=15 PID=22952 Type=udp receiver child=6 sock=10.150.20.2:5060
Process:: ID=16 PID=22953 Type=udp receiver child=7 sock=10.150.20.2:5060
Process:: ID=17 PID=22954 Type=slow timer
Process:: ID=18 PID=22955 Type=timer
Process:: ID=19 PID=22956 Type=MI FIFO

(There are some other child processes besides receivers, but these are ancillary — they do not perform Kamailio’s core function of SIP message processing. More about other processes later.)

You can think of these receiver processes as something like “traffic lanes” for SIP packets; as many “lanes” as there are, that’s how many SIP messages can be crammed onto the “highway” at the same time:

[Diagram: Kamailio SIP receiver processes as traffic lanes]

This is more or less the standard static “thread pool” design. For low-latency, high-volume workloads, it’s probably the fastest available option. Because the size of the worker pool does not change, the overhead of starting and stopping threads constantly is avoided. What applies to static thread pool management in general also applies here.

Of course, synchronisation, the mutual exclusion locks (“mutexes”) which ensure that multiple threads do not access and modify the same data at the same time in conflicting ways, is the bane of multiprocess programming, whatever form the processes take. The parallelism benefit of multiple threads is undermined when they all spend a lot of time blocking, waiting on mutex locks held by other threads to open before their execution can continue. Think of a multi-lane road where every car is constantly changing lanes; there’s a lot of waiting, acknowledgment and coordination that has to happen, inevitably leading to a slow-down or jam. The ideal design is “shared-nothing”, where every car stays in its own lane always–that is, where every thread can operate more or less self-sufficiently without complicated sharing (and therefore, locking) with other threads.

The design of Kamailio is what you might call “share as little as possible”; while certain data structures and other constructs (AVPs/XAVPs, SIP transactions, dialog state, htable, etc.) are unavoidably global (otherwise they wouldn’t be very useful), residing in the shared memory IPC space accessed by all receiver threads, much of what every receiver process requires to operate on a SIP message is proprietary to that process. For instance, every child process receives its own connection handle to databases and key-value stores (e.g. MySQL, Redis), removing the need for common (and contended) connection pooling. In addition to the shared memory pool used by all processes, every child process gets a small “scratch area” of memory where ephemeral, short-term data (such as $var(…) config variables) as well as persistent process-proprietary data lives. (This is called “package memory” in Kamailio, and is set with the -M command line argument upon invocation, as opposed to -m, which sets the size of the shared memory pool.)

Of course, actual results will depend on which Kamailio features you utilise, and how much you utilise them. Nearly all useful applications of Kamailio involve transaction statefulness, so you can expect, at a minimum, for transactions to be shared. If, for example, your processing is database-driven, you can expect receiver processes to operate more independently than if your processing is heavily tied up in shared memory constructs like htable or pipelimit.

Furthermore, in contrast to the architecture found in many classically multithreaded programs with this “thread pool” design, there is no “distributor” thread that apportions incoming packets. Instead, every child process calls recvfrom() (or accept() or whatever) on the same socket address. The operating system kernel itself distributes the incoming packets to listening child processes in a semi-random fashion that, in statistically large quantities, is substantially similar to “round-robin”. It’s a simple, straightforward approach that leverages the kernel’s own packet queueing and eliminates the complexity of a supervisory process to marshal data.

How many children to have?

Naturally, discussions of performance and throughput all sooner or later turn to:

What “children” value should I use for best performance?

It’s a hotly debated topic, and probably one of the more common FAQs on the Kamailio users’ mailing list. The stock configuration ships with a value of 8, which leads many people to ask: why so low? At first glance, it might stand to reason that on a busy system, the more child processes, the better. However, that’s not accurate.

The reason the answer is complicated is that it depends on your hardware and, more importantly, on Kamailio’s workload.

Before we go forward, let’s define a term: “available hardware threads”. For our purposes, this is the number of CPU “appearances” in /proc/cpuinfo. This takes into account the “logical” cores created by hyper-threading.

For instance, I have a dual-core laptop with four logical “CPUs”:

$ nproc
4

sasha@saurus:~$ cat /proc/cpuinfo | grep 'core id' | sort -u
core id		: 0
core id		: 1

In this case, our number of available hardware threads is 4.

In principle, the number of child processes that can usefully execute in parallel is equal to the number of available hardware threads (in the /proc/cpuinfo sense). Given a purely static Kamailio configuration on an 8-HW thread system, 8 receiver processes will have 8 different “CPU” affinities and peg out the processors with as many packets as the hardware can usefully handle. Such a configuration can handle tens of thousands of messages per second, and the limits you will eventually run into are more likely to do with userspace I/O contention or NIC frames-per-interrupt or hardware buffer type issues than with Kamailio itself.

Once you increase the number of receiver processes beyond that, the surplus processes will be fighting over the same number of hardware threads, and you’ll be more harmed by the downside of that userspace scheduling contention and the limited amount of shared memory locking that does exist in Kamailio than you’ll benefit from the upside of more processes.

However, most useful applications of Kamailio don’t involve a hard-coded config file, but rather external I/O interactions with outside systems: databases, key-value stores, web services, embedded programs, and the like. Waiting on an outside I/O call, such as a SQL query to MySQL, to return is a synchronous (or blocking) process; while the receiver thread waits for the database to respond, it sits there doing nothing. It’s tied up and cannot process any more SIP messages. It’s safe to say that the end-to-end processing latency for any given SIP message is determined by the cumulative I/O wait involved in the processing. Such operations are referred to as I/O-bound operations. Most of what a typical Kamailio deployment does is somehow I/O-bound.

This is where you have to take a discount from the aforementioned idealised maximum throughput of Kamailio, and it’s usually a rather steep one. The question is rarely: “How many SIP messages per second can Kamailio handle?” The right question is: “How many SIP messages per second can Kamailio handle with your configuration script and external I/O dependencies?” It stands to reason that given a fixed-size receiver thread pool, one should aim to keep external I/O wait to a minimum.

Still, when a receiver process spends a lot of time waiting on external I/O, it’s just sleeping until notified by the kernel that new data has arrived on its socket descriptor or what have you. That sleeping creates an opening for additional processes to do useful work in that time. If you have a lot of external I/O wait, it’s safe to increase the number of receiver threads to values like 32 or 64. If most of what your worker processes do is wait on a morbidly obese Java servlet on another server, you can afford to have more of them waiting.

This is why a typical Linux system has hundreds of background processes running, even though there are only 2, 4 or 8 hardware threads available. Most of the time, those processes aren’t doing anything. They’re just sitting around waiting for external stimuli of some sort. If they were all pegging out the CPU, you’d have a problem.

How many receiver processes can there be? All other things being equal, the answer is “not too many”. They’re not designed to be run in the hundreds or thousands. Each child process is fairly heavyweight, carrying, at a minimum, an allocation of a few megabytes of package memory, and toting its own connection handles to services such as databases, RTP proxies, etc. Since Kamailio child processes do have to share quite a few things, there are shared-memory mutexes over those data structures. I don’t have any numbers, but the fast mutex design is clearly not intended to support a very large number of processes. I suppose it’s a testament to CSRP’s relatively efficient call processing loop that, despite it being very database-bound, we’ve found in our own testing signs of diminishing returns after increasing children much beyond the number of available hardware threads.

Academically speaking, the easiest way to know that you need more child processes is to monitor the kernel’s packet receive queue using netstat (or ss, since netstat is deprecated in RHEL >= 7, in keeping with general developments in systemd land):

[root@allegro-1 ~]# ss -4 -n -l | grep 5060
udp    UNCONN     0      0      10.150.20.2:5060                  *:*
udp    UNCONN     0      0      209.51.167.66:5060                  *:*

The third column is the Recv-Q column. Under normal conditions, its value should be 0, perhaps ephemerally spiking to a few hundred or a few thousand bytes here and there. If the receive queue size is continuously > 0, bursting stubbornly high, or, worst of all, increasing monotonically, this tells you that incoming SIP messages are not being consumed by the receiver processes fast enough. These receive queues can be tuned to some extent, but that ultimately won’t solve your problem. You just need more processes to suckle on the packet teat.

More fine-grained results can be obtained with sipp scenario testing. Run calls through your Kamailio proxy and ramp the call setup rate up until the UAC starts reporting retransmissions. This gives insight into a different dimension of the problem than the packet queue: is your proxy taking too long to respond? In both cases, however, the available options are either to decrease the I/O wait to free up the receiver processes to process more messages, or to add more receiver processes.

However, once you go down the road of adding receiver processes, you need to ask yourself: are these processes doing a lot of waiting, or are they always busy? If your request/message processing in the configuration script has relatively little end-to-end I/O delay, all you’re going to do is overbook your CPU, driving up your load average and slowing down the system as a whole. In that case, you’ve simply run into the limits of your hardware or your software architecture. There’s no simple fix for that.

That’s why, when the question about the ideal number of receiver processes is asked, the answers given are often avoidant and noncommittal.

What about asynchronous processing?

Over the last few years, Kamailio has evolved a lot of asynchronous processing features. Daniel-Constantin Mierla has some useful examples and information from Kamailio World 2014.

The basic idea behind asynchronous processing, in Kamailio terms, is that, in addition to the core receiver processes, an additional pool of processes is spawned to which high-latency blocking operations can be delegated. Transactions are suspended and enqueued to these outside processes, and they’ll get to them whenever they can get to them; that’s the “asynchronous” part. This keeps the main SIP receiver processes free to process further messages instead of being blocked by expensive I/O operations, since the heavy lifting is left to the dedicated async task worker processes.

Asynchronous processing can be very useful in certain situations. If you know your request routing is going to be expensive, you can send back an immediate, stateless 100 Trying and push the processing tasks out to the async task workers.
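
A minimal sketch of that pattern, assuming the tm, sl and async modules are loaded; the route name and worker count here are illustrative, not gospel:

loadmodule "tm.so"
loadmodule "sl.so"
loadmodule "async.so"

# pool of asynchronous task worker processes (core parameter)
async_workers=4

request_route {
    if (method == "INVITE") {
        # immediate, stateless provisional reply to hold off retransmissions
        sl_send_reply("100", "Trying");

        # suspend the transaction and hand it to an async task worker;
        # this receiver process is free for the next message right away
        async_task_route("HEAVY_LIFTING");
        exit;
    }
}

route[HEAVY_LIFTING] {
    # the expensive lookups would live here, running in an async worker,
    # after which the request is relayed onward
    t_relay();
}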

However, a note of caution. Asynchronous processing of all kinds is often held to be a panacea, driven by popular Silicon Valley fashions and async-everything design patterns in the world of Node.js. There’s a lot of cargo cult thinking and exuberance around asynchronous processing.

As a guiding principle, remember that asynchrony is not magic, and it cannot achieve that which is otherwise thermodynamically impossible given the number of transistors on your chip. In many cases, asynchronous programming is almost a kind of syntactic sugar, a different semantic vantage point on the same operations which ultimately have to be executed in some way on the same system given the same resources. The fact that the responsibility for I/O multiplexing is pushed to an external, opaque actor doesn’t change that.

Asynchronous processing also imposes its own overheads: in the case of Kamailio, there’s a cost to suspending a TM transaction and reanimating it in a different process that should be weighed. (I cannot say how much, and, as with everything else in this article, have made no effort to measure it or describe it with the rigour of the scientific method. But it’s there.)

In the commonplace case of database-driven workloads, asynchronous processing does little more than push the pain point to a different place. To drive the point home, let’s take an example from our very own CSRP product:

In CSRP, we write CDR events to our PostgreSQL database in an asynchronous way, since these operations are quite expensive and can potentially set off a database trigger cascade for call rating, lengthening the transaction. We don’t really care if CDRs are written with a slight delay; it’s far more important that this accounting not block SIP receiver processes.

However, many CSRP customers choose to run their PostgreSQL database on the same host as the Kamailio proxy. If the database is busy writing CDRs and is pegging out the storage controller with write ops, it’s going to make everything less responsive, asynchronous or not, including the read-only queries required for call processing. Even if the database is situated on a different host, our call processing is highly database-dependent, so overwhelming the database has deleterious consequences regardless.

This can engender a nasty positive feedback loop:

  • Adding more asynchronous task workers won’t help; they’ll just further overwhelm storage with an additional firehose of CDR events.
  • The asynchronous task queue will stack up until calling SIP endpoints start CANCELing calls due to high post-dial delay (PDD).
  • Adding more receiver processes won’t help; if the system as a whole is experiencing high I/O wait, adding more workers to take on more SIP messages just means more queries and yet more load.

A fashionable design pattern can’t fix that; you just need more hardware, or a different approach (in terms of I/O, algorithms, storage demand, etc.) to call processing.

The point is: before shifting the load to another part of the system so as to get more traffic through the front door, consider the impact globally and holistically. Maybe you can get Kamailio to slurp up more packets, but that doesn’t mean you should. How well do your external inputs scale to the task?

Asynchronous tasks can be very handy for certain kinds of applications, most notably where some sort of activity needs to be time-delayed into the future (e.g. push notifications). We love our asynchronous CDR accounting, since it’s heavy, and yet there’s no need for that to be real-time or responsive by SIP standards. However, for maximising call throughput in an I/O-bound workload such as ours, in which storage and database demand is more or less a linear function of requests per second, it’s far less clear. Our own testing suggests that asynchronous processing yields marginal benefits at best, and that we might be better off keeping ourselves honest and putting our efforts into further lowering our processing latency in the normal, synchronous execution context.

Conclusion

  • There’s no straightforward, generic answer to the question of how to reap maximum throughput from Kamailio and/or how many receiver worker processes to use. It requires deep consideration of the nature of the workload and the execution environment, and, most likely, empirical testing — doubly so for bespoke and/or nonstandard applications.
  • A reasonable guideline for most generic and/or commonplace Kamailio workloads is to set the children equal to the number of available hardware threads. The prevalence of servers with quad-core processors + HyperThreading probably explains why the stock config ships with a setting of 8.
  • Asynchronous features are convenient and can, to an extent, be used to increase raw throughput, but rapidly encounter diminishing returns when the result is a drastic increase in base I/O load on either the local host or a dependency to which the workload is heavily I/O-bound.

Many thanks to my colleagues Fred Posner, Matt Jordan and Kevin Fleming for reading drafts of this article.

SIP UDP fragmentation and Kamailio – the SIP header diet

Failed calls due to fragmentation of large UDP SIP messages are a frequent support issue for us, as a provider of a Kamailio-based, SIP proxy-driven call processing platform.

Anecdotally, the problem seems to be getting more common, as SIP becomes more complex, offering more extensions and capabilities that are represented in some way in the messaging, and more voice codecs are on offer, such as Opus and G.722. Meanwhile, the capacity of many SIP user agents to reassemble fragmented UDP does not seem to have increased much over the last few years. Even where a SIP user agent can handle UDP reassembly, UDP fragments are dropped by braindead routers and NAT gateways. Setting the Don’t Fragment (DF) IP header bit doesn’t help. It’s safe to say that fragmented UDP messages can’t be handled reliably out there in the world.


Kamailio itself handles fragmented incoming UDP messages just fine, but that doesn’t mean we can pass the full message to the destination, where it will be fragmented again and, most likely, disappear into the ether. As you might surmise, it poses a unique challenge for us since Kamailio is at the heart of CSRP’s call processing core, not a Back-to-Back User Agent (B2BUA), as might be the case in a more traditional SBC or softswitch. An inline B2BUA that endogenously originates a new logical B-leg can easily overcome this problem by accepting a wealth of SIP message bloat on the A-leg liberally while emitting headers conservatively on the B-leg.

SIP proxies such as Kamailio participate within one logical call leg, and are, for the most part, bound by standards to pass along SIP messaging as received, without adulteration. This poses problems for interoperability, in the sense of “garbage in, garbage out”. Customers choose our platform for its lightweight design and high throughput characteristics, and that’s the trade-off they make.

Nevertheless, the question comes up often enough: can we use Kamailio to strip SIP headers from obese INVITE messages so as to tuck them back under the fragmentation threshold of roughly 1472 bytes of UDP payload implied by the ubiquitous Maximum Transmission Unit (MTU) of 1500 bytes (1500 minus 20 bytes of IPv4 header and 8 bytes of UDP header)? If so, which ones are safe to remove? What are the implications of doing so?

First, RFC 3261 scripture is quite clear on the true solution to this problem:

   If a request is within 200 bytes of the path MTU, or if it is larger
   than 1300 bytes and the path MTU is unknown, the request MUST be sent
   using an RFC 2914 [43] congestion controlled transport protocol, such
   as TCP. If this causes a change in the transport protocol from the
   one indicated in the top Via, the value in the top Via MUST be
   changed.  This prevents fragmentation of messages over UDP and
   provides congestion control for larger messages.  However,
   implementations MUST be able to handle messages up to the maximum
   datagram packet size.  For UDP, this size is 65,535 bytes, including
   IP and UDP headers.

In other words, if your UDP message is > 1300 bytes or within 200 bytes of the MTU, send it over a TCP channel.
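
For what it’s worth, Kamailio can be told to apply exactly this rule itself, by way of two core parameters; a sketch, useful only insofar as the next hop actually accepts TCP:

# if a forwarded request would exceed 1300 bytes over UDP,
# attempt to send it over TCP instead
udp_mtu = 1300
udp_mtu_try_proto = TCP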

RFC 3261 mandates that all compliant endpoints support TCP, but practically, many don’t, or don’t have it enabled, or it isn’t reachable. While SIP-TCP support is becoming more commonplace and viable for a variety of reasons, driven by improvements in hardware, memory and bandwidth as well as phenomena like WebRTC, it’s neither ubiquitous nor ubiquitously enabled. Then there are intervening factors like NAT. Consequently, it doesn’t represent a viable solution for most operators.

Having ruled out TCP, attention usually turns to SIP headers which are perceived to be nonessential, such as the commonplace:

Date: Thu, 17 Dec 2015 12:20:17 GMT
Allow: ACK, BYE, CANCEL, INFO, INVITE, MESSAGE, NOTIFY, OPTIONS, PRACK, REFER, UPDATE
User-Agent: SIPpety SIP BigSIP Rev v0.12 r2015092pl14

As well as generous SDP offers:

a=rtpmap:0 PCMU/8000
a=rtpmap:9 G722/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:18 G729/8000
a=fmtp:18 annexb=no
a=rtpmap:101 telephone-event/8000

“Can we just strip some of this stuff out?”

As it turns out, the answer depends on whether you are pure of heart. Will you give into sinful temptation, or are you one of the faithful few who walks with God?

Formally speaking, proxies are not supposed to modify anything in flight. Section 16.6 (“Request Forwarding”) of RFC 3261 says:

    The proxy starts with a copy of the received request.  The copy
    MUST initially contain all of the header fields from the
    received request.  Fields not detailed in the processing
    described below MUST NOT be removed.  The copy SHOULD maintain
    the ordering of the header fields as in the received request.
    The proxy MUST NOT reorder field values with a common field
    name (See Section 7.3.1).  The proxy MUST NOT add to, modify,
    or remove the message body.

So, no, you’re not supposed to tinker with the message headers in any way.

For those susceptible to Sirens’ songs, Kamailio has quite a few technical capabilities that go above and beyond what a proxy is formally allowed to do. However, I would urge strong caution. The dominant rule here should be: “Just because you can, doesn’t mean you should.” This is not generally heeded by sales-minded managers and executives. But you, as the hapless engineer, have to deal with the technical fallout.

Do keep in mind that all of these SIP headers (notwithstanding obvious unfiltered custom headers beginning with X-) are standardised, and they serve some purpose. However, some purposes are more important than others, and in Plain Old Call setup, not all purposes served by headers are necessarily important. Some message tampering by proxies is relatively inconsequential, in that removing some headers seems to have no material impact on the planet or on the UAC or UAS.

Still, I advise against a cavalier or presumptuous attitude. Axing data from SIP requests and replies that the proxy has a duty to pass through unmodified should be a last resort option, after you have exhausted all possible configuration changes to the UA(s) which could lead to a reduction in message size. This includes:

  • Disable unused or unnecessary codecs so they are not offered in the SDP stanza.
  • Enable compact headers, where possible.
  • Switch to TCP.
  • Increase MTU if signalling is taking place in a controlled LAN environment where you control the path MTU entirely.

But if you must travel down the dark road:

Most header fields should not be removed. Many “MUST NOT” (in the IETF RFC sense of the term) be removed. That said, here are the ones that can be removed relatively safely, in this author’s opinion:

  1. User-Agent / Server – This is a strictly informational field, and nobody is the wiser if it falls in the forest.
  2. Codecs in encapsulated SDP body – As long as the two endpoints can still agree on a common codec, what the far end doesn’t know can’t hurt it, and pruning codecs can result in substantial payload economies. Be very sure that all endpoints are going to have at least one codec they can agree on, and that the resulting agreement would yield desirable results in all scenarios.
  3. Allow – The value of this header enumerates every SIP method supported by the endpoint, and I have not seen any adverse impacts from removing it in “POTS” INVITE call flows. RFC 3261 itself suggests it can be read as somewhat redundant outside of OPTIONS flows. From Section 20.5:
       The Allow header field lists the set of methods 
       supported by the UA generating the message.
    
       All methods, including ACK and CANCEL, understood 
       by the UA MUST be included in the list of methods 
       in the Allow header field, when present.  The 
       absence of an Allow header field MUST NOT be interpreted 
       to mean that the UA sending the message supports no 
       methods.   Rather, it implies that the UA is not providing
       any information on what methods it supports.
    
       Supplying an Allow header field in responses to methods 
       other than OPTIONS reduces the number of messages needed.
  4. Date – Honestly, nobody cares what time the UA thinks it is. Axe it.
  5. Timestamp – Same as #4, unless you have an environment which actually uses it for round-trip estimates.

I would be very wary of removing Supported, as you simply cannot be certain that something Supported by one side is not Required by another. I would likewise avoid tinkering with Session-Timer-related headers (RFC 4028) for the same reason.

Under no circumstances should other well-known headers be touched.
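
With those caveats out of the way, here is a minimal sketch of what such a last-resort trim might look like in a Kamailio configuration, using the textops and sdpops modules; the 1300-byte threshold follows the RFC 3261 guidance quoted earlier, and the codec list is purely illustrative:

loadmodule "textops.so"
loadmodule "sdpops.so"

request_route {
    # only tamper when the request is actually at risk of fragmenting
    if (method == "INVITE" && msg:len > 1300) {
        # strictly informational headers
        remove_hf("User-Agent");
        remove_hf("Date");
        remove_hf("Timestamp");
        remove_hf("Allow");

        # prune codecs you are certain no endpoint actually needs
        sdp_remove_codecs_by_name("G729");
    }
}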

This is a bandage that will buy some time, but it does not address the fundamental issue. The best practical solutions to this problem remain:

  • Reduce message size through configuration options on the UAs.
  • Use of a reliable transport layer that handles reassembly (aka TCP).
  • Inline B2BUA (yes, CSRP begrudgingly provides this as an option).