Astricon 2014 – Start your engines!

OK all, this is my official Astricon countdown – start your engines, as Eric Klein and I will be attending Astricon this year. Vegas, here we come.

So, what are we going to talk about? Security and cloud computing. Yes, over the past year I’ve been returning to my old stomping grounds: the various cloud infrastructures that are publicly available – and how to exploit them to the max. I will be talking about the various methods of speeding up your clouded Asterisk server, and most importantly, I’ll share some of the methodologies and logic behind building these instances, maintaining them, and the various do’s and don’ts I’ve learned along the way.

I’m planning a few surprises and giveaways for my talk, so make sure you stay updated on this page 🙂

 

GreenfieldTech announces the general availability of app_cashmaker for Asterisk

Udim, Israel. April 1, 2009 – GreenfieldTech Ltd., a leading provider of Asterisk solutions and Asterisk training services in Israel, today announced the availability of its patented app_cashmaker application for the Asterisk Open Source PBX system. The CashMaker application is intended to be used by various content suppliers wishing to distribute audio and video based content utilizing their Asterisk server.

The application is built to accept an inbound call and then, according to various information gathered from the caller’s caller ID and/or the inbound DID number, correlate a relevant content stream directly to the caller. The content distributor doesn’t even have to care about what content to distribute, as the application will connect directly, via the Internet, to a remotely available RTBSP streaming server at the GreenfieldTech data center.

“The app_cashmaker application is the result of over three years of cumulative work, testing various content business models and applications. The main problem most content distributors have is how to gather content and manage it; with app_cashmaker this requirement is negated, allowing the distributor to concentrate on what they do best – flooding the newspapers with ads and marketing material to promote their content delivery service”, says Nir Simionovich, CEO and Founder of GreenfieldTech.

Simionovich indicated that the central content distribution facility is managed via a GTBS cluster environment, implemented partially on Amazon’s EC2 and S3 infrastructure while utilizing GreenfieldTech’s proprietary streaming and clustering technologies. GreenfieldTech has currently submitted 10 different provisional patents relating to the technologies comprising the app_cashmaker application and service. The GreenfieldTech marketing team indicated that initial beta trials showed an increase in content availability via the GreenfieldTech BSC Cloud facility of over 40%, with an increase of almost 80% in content delivery success.

Simionovich estimates that by the year 2010, over 20,000,000 users will use the GreenfieldTech app_cashmaker facility, completely disrupting the way mobile, audio and video content is distributed around the world.

Asterisk is the world’s leading open source PBX, telephony engine and telephony applications solution. It offers unmatched flexibility in a world previously dominated by expensive proprietary communications systems. The Asterisk solution offers a rich and flexible voice infrastructure that integrates seamlessly with both traditional and advanced VoIP telephony systems. For more information on Asterisk visit http://www.asterisk.org

For more information, please refer to the GreenfieldTech website at http://www.greenfieldtech.net.

Read my words – 3500 concurrent channels with Asterisk!

One of the biggest questions in the world of Asterisk is: “How many concurrent channels can be sustained with an Asterisk server?” – while many have tried answering it, the definitive answer still eludes us. Even the title of this post, “3500 concurrent channels with Asterisk”, doesn’t really say much about what actually happened. In order to understand what “concurrent channels” really means in the Asterisk world, let us take a look at some tests that were done in the past.

Asterisk as a Signalling Only Switch

This scenario is one of the most common in the testing world, and relies upon the basic principle of letting media (RTP) travel directly from one endpoint to the other, keeping Asterisk out of the loop for anything related to media processing. Examine the following diagram from one of the publicly available OpenSER manuals:

Direct Media Path between phones via a SIP Proxy

As you can see from the above, the media path is established directly between our two SIP endpoints, while only the signalling passes through the proxy.

This classic scenario has been tested many times, with varying codec negotiations, server hardware, endpoints and versions of Asterisk – and no matter what the setup was, the results were more or less the same. Transnexus reported being able to sustain over 1,200 concurrent channels in this scenario, which makes perfect sense.

Why does it make sense? Very simple: since Asterisk doesn’t manage or mangle the RTP packets, it performs less work and the server consumes fewer resources.
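To put a rough number on the work Asterisk avoids in this mode, here is a back-of-the-envelope sketch of my own (not part of the tests mentioned above), assuming G.711 with the common 20 ms packetization – 50 RTP packets per second per stream direction, and four streams per bridged call whenever Asterisk stays in the media path:

```python
# Rough illustration (my own assumptions, not figures from the tests above):
# how many RTP packets per second Asterisk never has to touch when the
# media path runs directly between the endpoints.

PTIME_MS = 20                          # typical G.711 packetization interval
PACKETS_PER_SECOND = 1000 // PTIME_MS  # 50 packets/s per stream direction
STREAMS_PER_BRIDGED_CALL = 4           # 2 call legs x 2 directions if Asterisk proxies media

def avoided_packet_rate(concurrent_calls):
    """RTP packets/s Asterisk would have to receive and re-send if it stayed in the media path."""
    return concurrent_calls * STREAMS_PER_BRIDGED_CALL * PACKETS_PER_SECOND

for calls in (300, 1200):
    print(calls, "calls ->", avoided_packet_rate(calls), "RTP packets/s bypassing Asterisk")
# At the 1,200 channels reported above, that is 240,000 packets/s of pure I/O
# the signalling-only server simply never sees.
```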

Asterisk as a Media Gateway

Another test that people have run numerous times is to use Asterisk as a Media Gateway – as a SIP to PSTN gateway, a SIP to IAX2 gateway, even as a SIP to SIP transcoding gateway. In any case, the performance here varied immensely from one configuration to another; however, they all relied on a simple call routing mechanism of routing calls between endpoints while letting Asterisk handle media proxying and/or codec translation.

Depending on the tested codec, I’ve seen reports of sustaining over 300 concurrent channels of media on a single server, while others claim around the 140 concurrent channel mark – this again mostly depended on the various hardware/software/network configurations, so there is nothing new there.

These tests tell us nothing

While these tests are really nice on the theoretical plane, they don’t really help us in the design and implementation of an Asterisk system – no matter if it is an IVR system, a PBX or a time entry phone system for that matter – they simply don’t provide that kind of information.

The Amazon EC2 performance test

In my previous post, Rock Solid Clouded Asterisk, I discussed the mathematics involved in calculating the RoI factors of utilizing cloud computing. One thing the article didn’t really tell us: did it actually work?

Well, here are some of the test results that we managed to validate:

  • Total number of Asterisk based Amazon EC2 instances used: 24
  • Total number of concurrent channels sustained per instance (including media and logic): 80
  • Average length of call: 45 seconds
  • Total number of calls served: 2.84 Million dials
  • Test length: approximately 36 hours

According to the above data, each server was required to place approximately 3,300 dials every hour. So, let’s run the math again (a quick sanity check in Python follows the list):

  • 3,300 dials per hour
  • 55 dials per minute
  • With each call averaging 45 seconds, the dialer bursts roughly 20 new calls per second,
    filling the 80 channel limit per server within about 4 seconds.

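As a quick sanity check on the figures above (assuming, for simplicity, that the dials were spread evenly across the 36 hour window), the per-instance rates and the average channel occupancy work out like this:

```python
# Back-of-the-envelope check of the test figures, assuming an even spread
# of dials across the whole test window.

TOTAL_DIALS = 2_840_000
INSTANCES = 24
TEST_HOURS = 36
AVG_CALL_SECONDS = 45

dials_per_instance_hour = TOTAL_DIALS / INSTANCES / TEST_HOURS
dials_per_instance_minute = dials_per_instance_hour / 60

# Average occupancy (Erlang load) = arrival rate * holding time
avg_busy_channels = (dials_per_instance_hour / 3600) * AVG_CALL_SECONDS

print(round(dials_per_instance_hour))    # ~3287 -> the ~3,300 dials/hour quoted above
print(round(dials_per_instance_minute))  # ~55 dials/minute
print(round(avg_busy_channels))          # ~41 channels busy on average, under the 80 channel ceiling
```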
According to the numbers we measured, each of the Amazon EC2 instances was utilized to about 50% of its CPU power, while showing a load average of 2.4 – mostly caused by I/O for SIP and RTP handling.

Conclusion

Asking for the maximum performance of Asterisk is, in itself, the wrong question. The correct question should be: “What is the maximum performance of Asterisk, utilizing X as the application layout?” – where X is the key factor for the performance. Asterisk application performance can vary immensely from one application to another, even when both appear to be doing the exact same thing.

When asking your consultant or integrator for top performance figures, be sure to include your business logic and application logic for the Asterisk server, so that they can actually answer your question. Asterisk by itself is just a tool; asking for its performance is like asking how many steaks a butcher’s knife can cut – it’s a question of what kind of steaks you intend to cut.

So long SigValue – Hello Asterisk + EC2!

As some of you may know, I’ll be attending ITExpo in Miami Beach, Florida. The subject I’ll be lecturing about is “Virtualizing Asterisk”. However, I have to be honest – I really should rename the talk to “Asterisk in the Cloud“.

Ever since the introduction of Amazon EC2, people have been trying to get Asterisk to run properly inside an EC2 instance. While installing a vanilla Asterisk on any of the Fedora/RedHat variant instances in EC2 isn’t much of a hassle, getting the funky stuff to work is a little trickier.

One of these tricky bits (which I haven’t yet found a solution for) is supplying a timing source for Asterisk’s MeetMe application. In the old days (prior to Asterisk 1.6), Asterisk required a virtual timer driver, provided in the past by Zaptel and nowadays by the DAHDI framework. The problem is that while you are fully capable of compiling and installing DAHDI on an Amazon EC2 instance, the trouble starts once you want to actually use it.

A few words about Amazon EC2

For those not familiar with Amazon EC2, its general infrastructure is based upon the XEN virtualization project. XEN is a para-virtualization framework, meaning that it performs some of the work utilizing the underlying host kernel and hypervisor, and some of the work with a special kernel inside the virtualized guest instance. This poses an interesting issue for every type of application that relies on hardware resources and their emulation.

To learn more about the XEN project, go to http://www.xen.org.

So, where’s the big deal?

So, if you can compile your code and run it in an instance, as long as you have the kernel headers and kernel source packages – you should be just fine – right? WRONG!

Amazon EC2 deploys its own kernel binary image upon bootstrap, causing whatever compilation you may have done to the kernel to go away (unless you’re creating a machine image from real scratch). Another issue is a version skew between the installed operating system’s kernel modules, the actual kernel and the installed compiler. For example, the instance I was using had its XEN-capable kernel compiled with gcc version 4.0.X, while the installed operating system shipped gcc version 4.1.X – so no matter what I did to compile my kernel modules or a kernel binary, I would always end up in a situation where loading the newly compiled kernel modules generated an error.
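For what it’s worth, the skew itself is easy to spot: the running kernel records the compiler it was built with in /proc/version, which you can compare against the gcc shipped in the instance. The small helper below is my own sketch, not an Amazon-provided tool:

```python
# My own quick check (not an Amazon tool): compare the gcc that built the
# running kernel against the gcc installed inside the instance.
import re
import subprocess

def kernel_gcc_version():
    with open("/proc/version") as f:
        banner = f.read()
    match = re.search(r"gcc[^\d]*(\d+\.\d+(?:\.\d+)?)", banner)
    return match.group(1) if match else None

def installed_gcc_version():
    out = subprocess.run(["gcc", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"(\d+\.\d+(?:\.\d+)?)", out)
    return match.group(1) if match else None

kernel_gcc = kernel_gcc_version()
local_gcc = installed_gcc_version()
print("kernel built with gcc", kernel_gcc, "/ instance ships gcc", local_gcc)

if kernel_gcc and local_gcc and kernel_gcc.split(".")[:2] != local_gcc.split(".")[:2]:
    print("major.minor mismatch - freshly built modules will most likely refuse to load")
```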

Did I manage to solve it? NOT YET. I’m still working on it, and I have to admit that, considering I have over 10 years of Linux experience and have compiled kernels from scratch many times, this one has left me a little baffled – I guess I’ll just need a few more nights and a case of Red Bull to crack this one open.

So, what can we do with EC2?

In my view, EC2 + Asterisk is the ultimate IN/NGN services environment – and I have proof of that. A recent lab test I did with one of my customers showed a viable commercial alternative to SigValue when using Asterisk on top of EC2. The main reason for our belief in EC2 was the following graph:

IN/NGN usage over 24 hours

What we noticed was that while our IN/NGN system was generating traffic, its general usage showed a peak for a period of 2.5 hours, with a gradual increase and decrease over a period of almost 10 hours. That immediately led us to a question: “Can we use Amazon EC2 to provide an automated scaling facility for the IN/NGN system, allowing the system to reduce its size as required?”

To do this, we’ve devised the following IN/NGN system:

Amazon EC2 Enabled IN/NGN Platform

Our softswitch had a static definition routing calls to all our Asterisk servers, including our EC2 instances, which had static Elastic IP numbers assigned to them. The EC2 Controller server was in charge of initiating the EC2 instances at pre-defined times, namely 30 minutes prior to the projected increase in traffic. Once the controller reached its due time, it would automatically launch the EC2 instances required to sustain the inbound traffic.
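The controller logic itself is little more than a scheduled script. The sketch below shows the idea using today’s boto3 library – our controller predates boto3, and the AMI ID, Elastic IPs and instance count are placeholders, so treat it as an illustration rather than the code we ran:

```python
# Illustrative sketch of the "EC2 Controller" role using boto3; the AMI ID
# and Elastic IPs are placeholders, and our original controller used the
# Amazon tooling available at the time.
import boto3

AMI_ID = "ami-00000000"          # placeholder: a pre-built Asterisk AMI
ELASTIC_IPS = ["198.51.100.10", "198.51.100.11", "198.51.100.12",
               "198.51.100.13", "198.51.100.14"]

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_up(count):
    """Launch `count` Asterisk instances and pin them to the pre-allocated Elastic IPs."""
    resp = ec2.run_instances(ImageId=AMI_ID, InstanceType="c1.medium",
                             MinCount=count, MaxCount=count)
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]

    # Wait for the instances to boot before the projected traffic ramp hits them.
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)

    # Attach the static Elastic IPs the softswitch already routes to
    # (EC2-Classic style; inside a VPC you would pass AllocationId instead).
    for instance_id, eip in zip(instance_ids, ELASTIC_IPS):
        ec2.associate_address(InstanceId=instance_id, PublicIp=eip)
    return instance_ids

# Run from cron (or any scheduler) roughly 30 minutes before the expected peak:
#   scale_up(5)
```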

For our tests, we initiated 5 AMI instances of the EC2 c1.medium type. This instance basically includes 2 cores of an AMD Opteron, about 8GB of RAM and about 160GB of hard drive – more than enough. Initially we spread the load evenly across the servers, reaching about 80 concurrent channels per instance, and all was working just fine. We then managed to reach a point where we were able to sustain a total of about 110 concurrent channels per instance, including the media handling – which is not too bad, considering that we are running inside a XEN instance. The one thing that made the entire environment extremely lightweight is the GTx Suite of APIs for Asterisk. Thanks to the GTx Suite, scalability is fairly simple, as all application-layer logic is controlled from a central business logic engine, serving the Asterisk servers via an XML-RPC based web service. Thanks to Amazon’s practically infinite bandwidth allocation, the round trip from the Asterisk servers to the US-based central business logic engine clocked in at around 25 msec, so there was no visible delay to the end user.
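To give a feel for how thin that per-call round trip is, here is an illustrative client-side sketch of the kind of XML-RPC exchange each Asterisk box performs against the central business logic engine. The endpoint URL and method name are invented for the example – the actual GTx API is not public:

```python
# Illustrative only: the service URL and the route_call method are invented,
# as the actual GTx business-logic API is proprietary.
from xmlrpc.client import ServerProxy

logic = ServerProxy("http://logic.example.net:8080/RPC2")

def route_call(caller_id, did):
    """Ask the central business-logic engine how to handle an inbound call."""
    # One round trip (~25 ms in our setup) per routing decision.
    return logic.route_call(caller_id, did)

# An AGI script on the Asterisk box would typically call something like
#   decision = route_call(agi_env["agi_callerid"], agi_env["agi_dnid"])
# and then execute the returned dialplan instructions locally.
```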

It is clear that the combination of Asterisk and EC2 operational constructs can allow a carrier to establish its own IN/NGN environment. However, how these are designed, implemented and operated is in the hands of the carrier – and not a specific vendor. Whether carriers around the world will take to this approach, time will tell. With a recent survey stating that 18% of the US PBX market is currently held by open source solutions, and Digium accounting for 85% of that 18% (~15% of the total), I’m confident that we will see this combination of solutions in the near future.