As the world around us changes, services are rapidly shifting from human-rendered services to bots and applications that run on your mobile device. From your local pizza shop to a multi-billion-dollar corporation, everyone is moving to the bot/application economy paradigm in order to facilitate growth and lower their TCO.

According to a SkyHigh Networks study, the following may come as a shock to most: the average enterprise will use up to 900 different cloud applications. These, in turn, require an amazing number of over 1,500 different cloud services in order to work. Out of these 1,500 cloud services, a group of the 50 top-most cloud services can be observed, normally relating directly to infrastructure. We'll call these "Super Clouds".

The “Super Clouds” can be divided into several “Primary” groups:

– Infrastructure Clouds (Amazon AWS, Google Compute, Microsoft Azure, etc.)
– Customer Relation Clouds (Salesforce, ZenDesk, etc.)
– Real Time Communication Clouds (Twilio, Nexmo, Tropo, etc.)

It is very common for a company to work solely with various cloud services in order to provide its own service. However, using cloud services has a tipping point: "When is a cloud service no longer commercially viable for my service?" Or, in other words: "When do I become Uber for Twilio?"

Twilio's stock recently dropped significantly following Uber's announcement (http://bit.ly/2rVbzxG). Judging from the PR, Uber was paying Twilio over $12M a year for its services, which means that for the same cash, Uber could actually buy out a telecom company to provide the same service. And apparently, this is exactly what's going to happen, as Uber works to establish the same level of service with internal resources.

Now, the question that comes to mind is the following: "What is my tipping point?" While most will not agree with me (specifically if they work for an RTC cloud service), every, and I do mean EVERY, type of service has a tipping point. To estimate your tipping point before you actually reach it, try following the rules of thumb below.

Rules of Thumb

  • Your infrastructure cloud is the least of your worries
    As storage, CPU, networking and bandwidth costs drop worldwide, so do your infrastructure costs. IaaS and PaaS providers constantly update their prices and are in constant competition with one another. In addition, once you commit to a certain sizing, prices can be negotiated. I have several colleagues working at the three main competitors; the competition is so fierce that they are willing to cover migration costs and render services for free for up to 12 or 24 months in order to win new business.
  • Customer Relation Clouds hold your most critical data
    As your service/product is consumer oriented, your customers are your most important asset. Take great care in choosing your partner and make sure you don't outgrow them. In addition, if you do use one, make sure you truly need their service. Sometimes a simple VTiger or another self-hosted CRM will be enough. In other words, Salesforce isn't always the answer.
  • Understand your business
    If your business is selling rides (Uber, Lyft, Via, etc.), tools like Twilio are a pure expense. If your business is building premium rate services or providing custom IVR services, Twilio is part of your pricing model. Understand how each and every cloud provider affects your business, your bottom line and, most importantly, the consumer.

Normally, most companies in the RTC space will start out using Amazon AWS as their IaaS and services such as Twilio, Plivo, Tropo and others as their CPaaS. Now, let us examine a hypothetical service use case:

– Step 1: A user dials into an IVR via an application
– Step 2: The IVR uses speech recognition to analyze the caller's intent
– Step 3: The IVR forwards the call to a PSTN line and records the call for future transcription

Let us assume that we use Twilio to store the recordings, the Google Speech API for transcription, and Twilio for the IVR application, and that we're forwarding the call to a phone number in the US. Now, let's assume that the average call duration is 5 minutes. Thus, we can extrapolate the following:

– Cost of transcription using Google Speech API: $0.06 USD
– Cost of call termination: $0.065 USD
– Cost of call recording: $0.0125 USD
– Cost of IVR handling at Twilio: $0.06 USD
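
The line items above can be totaled with a quick sketch (assuming each figure is the cost of one average 5-minute call):

```python
# Per-call cost items from the list above, assuming each figure
# covers one average 5-minute call.
costs = {
    "transcription (Google Speech API)": 0.06,
    "call termination": 0.065,
    "call recording": 0.0125,
    "IVR handling (Twilio)": 0.06,
}

total_per_call = sum(costs.values())
print("Total cost per 5-minute call: $%.4f" % total_per_call)  # ~$0.1975
```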

So, where is the tipping point for this use case? Let's separate it into two distinct business cases: a chargeable service (a transcription service) and a free service (e.g. Uber driver connection).

  • A Chargeable Service
    Assumption: we charge a flat $0.25 USD per minute
    Let’s calculate our monthly revenue and expense according to the number of users and minutes served.

– Up to 1,000 users, generating 50,000 monthly minutes: $12,500 – $9,625 = $2,875
– Up to 10,000 users, generating 500,000 monthly minutes: $125,000 – $96,250 = $28,750
– Up to 50,000 users, generating 2,500,000 monthly minutes: $625,000 – $481,250 = $143,750

Honestly, not a bad model for a medium-sized business. But the minute you take into account the multitude of marketing costs, office costs, operational costs, etc., you need around 500,000 users in order to truly make your business profitable. Yes, I can negotiate some volume discounts with Twilio and Google, but even after that, my overall discount will be 20%, maybe 30% – so the math will look like this:

– Up to 1,000 users, generating 50,000 monthly minutes: $12,500 – $9,625 = $2,875
– Up to 10,000 users, generating 500,000 monthly minutes with a 30% discount: $125,000 – $67,375 = $57,625
– Up to 50,000 users, generating 2,500,000 monthly minutes with a 30% discount: $625,000 – $336,875 = $288,125
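
These tables are simple enough to script. Here's a minimal sketch; the $0.1925 per-minute cost is the rate implied by the undiscounted figures ($9,625 / 50,000 minutes), and the 30% volume discount is my own estimate, not a quoted price:

```python
# Monthly revenue/expense/profit for the flat-rate transcription service.
PRICE_PER_MIN = 0.25    # what we charge per minute
COST_PER_MIN = 0.1925   # what Twilio + Google cost us per minute

def monthly_profit(minutes, discount=0.0):
    revenue = minutes * PRICE_PER_MIN
    expense = minutes * COST_PER_MIN * (1.0 - discount)
    return revenue, expense, revenue - expense

for minutes, discount in [(50000, 0.0), (500000, 0.3), (2500000, 0.3)]:
    revenue, expense, profit = monthly_profit(minutes, discount)
    print("%9d min: $%10.2f - $%10.2f = $%10.2f"
          % (minutes, revenue, expense, profit))
```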

But, just to be honest with ourselves, even at a monthly cost of $67,375 USD, I can actually build my own platform to do the same thing. In this case, the 500,000-minute mark is very much a tipping point.

  • A Free Service
    Assumption: we charge a flat $0.00 USD per minute
    Let’s calculate our monthly revenue and expense according to the number of users and minutes served.

– Up to 1,000 users, generating 50,000 monthly minutes: $9,625
– Up to 10,000 users, generating 500,000 monthly minutes with a 30% discount: $67,375
– Up to 50,000 users, generating 2,500,000 monthly minutes with a 30% discount: $336,875

In this case, there is just no case for building this service using Twilio or a similar service, because it will be too darn expensive from the start. Twilio provides a wonderful test bed and PoC environment, but when push comes to shove, it will just not hold up financially. This is a major part of why services such as Uber, Lyft, Gett and others will eventually leave Twilio-type services: at some point, the service they are consuming becomes too expensive, and they must take it back in-house to make sure they remain competitive and profitable.

When Greenfield started working on Cloudonix, we understood this growth issue from the start, and that's why Cloudonix isn't priced or structured in such a way. In addition, as Cloudonix includes the ability to obtain your own slice of Cloudonix, or even your own on-premise installation, your investment is always safe.

To learn more about our Cloudonix CPaaS and our On-premise offering, click here.

For the better part of the past 15 years, I've been a PHP developer. Really, I've developed everything in PHP, ranging from server-side services to web services to backends – you name it, I've probably done it with PHP. Don't get me wrong, I love PHP and it will always remain my language of choice for doing things really fast.

However, for the past year I've been increasingly developing in Python. I've always dabbled with Python, but never really had the chance to truly get down and dirty with it. Thanks to a couple of projects during the past year, specifically ones involving Google AppEngine, I've had to sharpen my Python skills and get to a point where I can develop with the same agility I have with PHP. Honestly, it wasn't simple – at times, the various errors a framework can spit at you truly made me want to strangle someone. However, once you get around to reading those cryptic messages, working with Python is truly a delight.

So, why do I think Python should be the first language one learns? Here are my thoughts:

I started my coding days with BASIC – to be more accurate, GW-BASIC (yes, I am that old). From there I moved to Pascal (Turbo Pascal, to be more accurate), then C, then C++, C++ Builder, Visual C++ (yes, I did MFC at some point in my life as well). I then decided that my life is in the open source world, and thus the track went through Perl, Java and, of course, PHP. Honestly, somewhere around 2005, the mixture of C, Java and PHP truly gave me all the power I needed to do my job – so I didn't really find the time to learn a new language.

Then, about a year ago, I decided it was high time to learn something new – specifically, I became increasingly interested in the Google AppEngine platform. Yes, I've been using Google Compute and other cloud platforms for a few years now, and I've used most of Amazon's services, ranging from EC2 up to Redshift and their hosted Hadoop clusters. But when Google AppEngine came out, it only supported Python, Java and Go. Java is the least favorite language in my toolbox – honestly, I hate it. I've never coded in Go, and didn't really feel like starting with it. And Python – well, I'd dabbled with it, but can't say I'd done anything too serious with it. In 2014, Google added PHP support to Google AppEngine. Damn, that sounds cool – let's play around with that. So, I built a few applications atop AppEngine and the PHP SDK. I rapidly realized that while the PHP SDK gives you some power, Python is the more natural choice for AppEngine. So, I more or less sat my ass down for 3 days and decided to teach myself proper Python.

It took me about 3-4 days to get around the quirks of AppEngine and how to get it up and running using PyCharm (if you use Python, it's by far the best IDE I've seen). Then came building my first application, then migrating an existing application (a fairly big one) from PHP to Python on AppEngine. I rapidly moved along to using easy_install, pip and the other Python tools that make life so easy for developers – honestly, right now, I can't figure out why you would use anything other than Python for shell-environment tools. But regardless, I honestly think Python is the first language you should teach students – not C/C++, not Java, not Ruby and surely not PHP (and I'm a huge PHP advocate).

Why do I say this? Here are my main reasons:

  1. Python is object oriented from the ground up, which means that teaching object-oriented programming with Python is easy and straightforward for newcomers.
  2. Python is strongly typed, which means that type errors are dealt with harshly: mixing incompatible types raises an error instead of being silently coerced, promoting careful variable handling – one of the nice things that makes good code readable code.
  3. Python's physical layout construct, where blocks of code must be indented in a specific manner in order for the code to work, is GENIUS. I'm very much a "Source Code Nazi" (imagine that coming from a Jew, right?). For me, indentation, proper loop blocks, proper case blocks, making sure things are wrapped really tight without too many white spaces – this is what makes code look nice.
  4. Python is interpreted, not compiled, and yet it is strong enough to handle the most complex of multi-threaded tasks.
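
To illustrate points 1-3, here's a tiny, purely illustrative example – a class, blocks defined by indentation alone, and Python refusing to silently mix types the way PHP would:

```python
class Greeter:
    """Everything in Python is an object - even this class itself."""

    def __init__(self, name):
        self.name = name

    def greet(self, times):
        # The loop body is defined purely by its indentation -
        # no braces, no 'end' keyword.
        for i in range(times):
            print("Hello, %s! (round %d)" % (self.name, i + 1))


g = Greeter("world")
g.greet(2)

# Strong typing: Python refuses to silently coerce a number into a
# string - this raises a TypeError instead of producing "round 1".
try:
    result = "round " + 1
except TypeError:
    result = "TypeError raised"
```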

In other words, if you take the above and teach it to a new developer, someone writing code for the first time in their life, the result will be a developer who may not dish out the best code at first (after all, a beginner), but that code will be readable, manageable and maintainable. Python automatically promotes these qualities through its structure – through its rigidness and its agility at the same time.

As part of my academic studies, I studied education and how to teach computer science to high school students. I was taught that you should start with Pascal or C, then move to object-oriented programming, then move on to more advanced stuff. I have one thing to say: BULLSHIT! Honestly, the first thing you need to teach is Python; after Python, the rest is just syntax. Nothing more, nothing less – pure, simple, straightforward syntax.

Would love to hear your opinion on this one…

Version control! One of the most controversial subjects in the software industry. Whether you're a Subversion fanatic, a Git hardcore or a Mercurial elitist, everybody has something to say about version control. While in the past we put our trust in local CVS and SVN repositories, today most of us use cloud-based services such as GitHub, Bitbucket or GitLab.

After spending much time this week setting up our new GitLab repositories – mainly for finished projects that are no longer in active development and can be removed from our quota at GitHub – I came to realize that all these companies are in a position to be considered "anti-trusted". Imagine a hypothetical situation where GitHub starts examining the code we submit to it – not only the public code, but the private code as well. Imagine what kind of intellectual property assets they have access to.

In 2001, Tim Robbins portrayed a software-giant CEO so driven by ambition and greed that he is actually willing to have developers killed for their code. Where in 2001 developers mostly worked in closed quarters and shared their work via private means, today almost all of us use the cloud in some form. Can these services be trusted? What happens if one of them gets bought out by a software giant?

Let us imagine the following scenario:

The GitGiantCloud (GGC) service has recently been acquired by MegaGreedySoftwareCorp (MGSC). MGSC announces that it will continue to run GGC as always; however, in the background they start analyzing the code within the private repositories – completely violating their EULA. Would anyone know about it? The answer is NO. Is it considered a breach? Well, they can always excuse it as: "we identified a potential breach, and had to take these measures to investigate it". In other words, even if they are reading your code, you'll never know whether it's happening or not.

For many years, the question of high availability has circled the same old subject of replication: how do we replicate data across nodes? How do we replicate the configuration to stay unified across nodes? Is active-active truly better than active-passive? And, most importantly, what happens beyond the two-node scenario?

Since the inception of the Linux-HA project (and it's been around for years now – over 15), it has been the pivotal tool for creating Linux-based high-availability clusters. Heartbeat, STONITH and Mon will take care of floating the IP addresses and services across nodes – no biggie there; making sure the data is consistent across the board is something completely different. Recently, one of the better-known commercial Asterisk offerings launched an Asterisk-HA solution. It's long overdue – it's just a shame it's a commercial offering without an open source derivative; after all, it is open source based (I hope).

However, being a high-availability solution doesn't mean you are truly a clustered solution. It is an active-passive solution, with a major caveat (at least as I see it): if your data sync fails for some reason, you end up with a split-brain issue, and your entire solution is rendered moot. Don't get me wrong – I think that, for now, this solution is the next best thing to sliced bread, simply because there is no other solution out there. However, the fact that it is the only solution doesn't make it the right solution.

What does federating mean in this respect? It means that data doesn't need to be replicated across the board; instead, it automatically trickles across the network, making sure all nodes in the network have clear visibility of it. If a node fails inside the cluster, clients automatically redirect themselves to a new node – no need for floating IP addresses. Call routing is determined automatically upon request and is never preset for the entire platform. And, most importantly, the amount of data traversing between the nodes is kept to a minimum, preventing excessive usage of network resources and I/O.

What would it mean to federate the configuration of a PBX system? First of all, each unit must be capable of working on its own. Information should trickle across the nodes via two methodologies: a multicast/broadcast mechanism (for LAN-connected nodes) and a publisher/subscriber relation (for externally connected nodes). When a change is made to any of the systems, that change is replicated to all the systems. The configuration is never fully transmitted between nodes (apart from when a new node joins the cluster). Routing decisions are made dynamically across the network; they are not predetermined or preconfigured. There is no need to keep the cluster nodes in perfect physical alignment – mixing hardware specifications should be considered the norm. And external devices should be able to "speak" to the cluster without being aware of its existence.
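
The configuration side of this can be sketched in a few lines of toy code. This is an in-process stand-in, not a real PBX API – `Bus`, `Node` and the route key are all hypothetical – but it shows the core idea: only the change (a delta) travels between nodes, never the full configuration.

```python
class Bus:
    """Stand-in for a pub/sub transport (e.g. multicast on a LAN)."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, node):
        self.subscribers.append(node)

    def publish(self, origin, delta):
        # Deliver the delta to every node except the one that made it.
        for node in self.subscribers:
            if node is not origin:
                node.apply(delta)


class Node:
    """A PBX node that keeps its own copy of the config and can
    operate on its own if the rest of the cluster disappears."""
    def __init__(self, name, bus):
        self.name = name
        self.config = {}
        self.bus = bus
        bus.subscribe(self)

    def set(self, key, value):
        # A local change produces a delta that trickles outward.
        delta = {key: value}
        self.apply(delta)
        self.bus.publish(self, delta)

    def apply(self, delta):
        self.config.update(delta)


bus = Bus()
a, b, c = Node("a", bus), Node("b", bus), Node("c", bus)
a.set("route/1800", "gateway-2")  # change made on one node...
# ...and every node now holds the same routing entry, without
# a full configuration sync ever crossing the wire.
```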

Once we achieve all of the above, we’ll truly get to a point where we’ve clustered Asterisk (or another open source project) the right way.

So, Astricon 2014 is over and behind us, and I'm now sitting at the Holiday Inn in Chicago. I have to admit, moving from the Red Rock Resort and Casino to the Holiday Inn in Chicago – talk about a mind-blowing change. Just to give a general idea, the bathroom in Vegas was roughly the size of the entire room here (mental note to self: next time, order something better via BA miles).

So, this year's Astricon was, at least for me personally, one of the best I've been to. Various topics that I started talking about years ago have finally made their way to the public's ear, and the community and adopters are finally picking up on them. Security, privacy, cloud computing, proper usage of Linux and virtualization – these have now become the predominant subjects people are confronted with.

Unlike previous years, I decided to talk about cloud computing and share some tips from the cloud front line. Cloud computing, specifically cloud-based servers, is an infrastructure that many want to use, yet very few truly understand what it means. What kind of impact does swap have on your instances? What is the swappiness value? And why the hell would I choose one cloud over another – aren't they all the same in the end?

This year, we had the first-ever Astricon Hackathon. I've participated in several hackathons in the past, but this one was very special to me. While in most hackathons the participants never knew each other (well, at least 95% of them), here most participants did know each other – some on a very personal basis. As you know, my latest open source passion is my own pet project – phpari. My hack for the contest was a phpari sandbox; imagine it as a cross between JSFiddle, Asterisk and PHP – a simple playground where you can try out various parts of ARI in general and the toolkit in particular. Much to my surprise (as there were other strong candidates), the phpari sandbox won the "Asterisk Developer's Team" award for best use of Asterisk during the Hackathon. To me personally, it means a whole lot. I've been working with Asterisk for over 12 years now; in fact, I was joking around with Corey McFadden that we are probably the oldest Asterisk community members around – well, probably oej, joshc and a few others are as old as us. We never had a chance to actually see how we work together, how we think about various problems and challenges. This was the first time we got to see each other work, how we approach things – and it was exciting. Watching Tim Panton battle the various concepts of Respoke and his application, trying to figure out exactly why Respoke didn't work as he expected, was amusing to say the least.

So, after Astricon, we spent the last evening going out to the Vegas Strip. I have one thing to say: "I don't think I like Vegas all that much." It's just too much of everything. Too much "Puttin' on the Ritz" facade, too much commercialism of anything and everything – just too much for me. Don't get me wrong, it's an interesting place to visit, but I don't believe more than 2-3 days are required in order to appreciate the place. Between the lights that are always bright, making you believe it is daylight, and the hotel that literally had no windows to the outside (so you won't know if it's day or night), your entire system gets totally screwed up.

So, during the night of the “geeks take over Vegas”, the following group of people decided to head to the strip:

  • Allison Smith
  • Peter – aka Mr. Allison (hey, what do you expect when you're married to the voice of Asterisk)
  • Ben Klang (Adhearsion/Mojo-Lingo)
  • Evan (sorry, can’t recall the rest)
  • Steve (Mojo-Lingo)
  • Dan Jenkins (Respoke)
  • Eric Klein (My partner in crime)
  • Corey McFadden (Venoto)
  • Beth – Corey's wife
  • Steve (From South Africa)

So, here we are sitting at the Cosmopolitan, waiting for our table at STK. Until we got it (at 10:45 PM), we sat on the stools at the bar. At the table next to us, a man and two young ladies were definitely getting it on. To be more descriptive, apart from practically going at it in front of us all, they were all over the place. As they say, what happens in Vegas stays in Vegas – but what happens in a public restaurant, don't be surprised to find on Twitter. Come to think of it, we should have videoed the entire thing. Now, don't get me wrong, I'm as much a man as the next guy, and I admit the display was interesting (to say the least) – but come on, we're in a public place, get a bloody room. The funny bit was when Peter came back from the restrooms, saying he was delayed because they were occupied. When the door opened, two girls walked out of the same stall – and I'll let your imagination continue from there. As Eric commented on TripAdvisor: the music was loud, the service was slow – but the steak WAS PERFECT. Indeed, one of the finest steaks I've had in a long time.

One more thing I need to mention is our dinner (Eric and myself) with John Draper – aka Captain Crunch – but that's a whole different story altogether.