Sunday, December 25, 2016

Data-driven communication

I've noticed that in our communication, we often tend to pass judgment, which is processed information, instead of passing raw information. That is unfair to the people we are speaking to, because we do not give them the chance to form their own judgment. Not only is it unfair, it is also a source of misunderstandings and conflicts:
  • An upset customer, about service: "Your network's quality is not good enough, we are receiving complaints from our users"
  • An excited business developer, about a new opportunity: "Management should support me, this project will generate around €1M of revenues, a huge one!"
  • An angry colleague: "You did not reply to several of my emails this week"
In all of the above examples, the information sent is not objective. In the following, I will give some tips for achieving healthier communication.


The first golden tip would be: "A little less adverbs, a little more data!". Adverbs and qualifiers (too much, extremely, slightly, highly, nearly, very, quite, sometimes, seldom, often) without supporting data are pure judgment. Consider the upset customer instead telling you, his network provider: "In the last week, the latency on your network averaged 150 ms between our offices in Dubai and Paris, causing up to 10 user complaints about call quality". In the first case, you would probably open a ticket with your technical services to correct the issue; in the second case, you would probably compare the measured performance to the contracted SLAs, might find it compliant, and thus propose a new feature (VoIP QoS) to improve the customer experience. In the first case you are losing time, whereas in the second you are generating new revenues.
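To make it concrete, here is a minimal sketch of that second scenario; the SLA threshold is invented for illustration, not taken from any real contract:

```python
# Invented figures: decide the next action by comparing the measured latency
# to the contracted SLA, instead of reacting to "not good enough".
measured_avg_ms = 150        # last week's Dubai <-> Paris average
sla_threshold_ms = 180       # hypothetical contracted latency SLA

if measured_avg_ms <= sla_threshold_ms:
    print("Within SLA: propose VoIP QoS to improve call quality.")
else:
    print("SLA breach: open a ticket with technical services.")
```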

The second tip is backing absolute values with references, for better appreciation. For example, compare this quarter's sales growth rate to the same quarter of last year, and compare the churn rate of your customer base to your industry's average. Consider our excited business developer again: €1M is indeed a lot of revenue, but his company has limited resources and several other projects in its pipe generating €10M each. It wouldn't be surprising if the €1M project did not get priority (a small confession: it's a true personal story!).

The third tip is providing information about samples as well as about the population we are sampling from. Consider the angry colleague again. It's true that this week you didn't answer 5 of his emails, but overall he sent you over 100, which means you had around a 95% responsiveness rate! The sample here is unanswered emails, but the population is sent emails. That should completely calm your colleague :)
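For the arithmetic-minded, the sample-versus-population logic fits in four lines:

```python
# The sample is unanswered emails; the population is all emails sent.
unanswered = 5
sent = 100
responsiveness = (sent - unanswered) / sent
print(f"Responsiveness this week: {responsiveness:.0%}")  # 95%
```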

Using data in our communication is not about inhibiting the expression of our thoughts and emotions. On the contrary, you are free to add your own subjective judgment, as long as you provide the full picture backed with data. Not only will you give your interlocutor the chance to form his own judgment, but sometimes, confronted with objective data, your own perception and judgment can change. This is typically the case when your judgment is biased by temporary emotions.

In this post I gave mostly work-related examples, but this applies to everyday communication too: better to say "I love you like the ocean" than "I love you a lot", eh? :D

Thursday, December 8, 2016

LinkedIn Influencers & Silent Evidence

This post will be a short one!

Many of us follow influencers on LinkedIn, like CEOs of big companies such as Bill Gates, Jeff Bezos and Mark Zuckerberg. We follow their advice, read their "5 rules to change your life", and share their posts with our community. These people inspire us and make us want to follow their path, to replicate their success pattern in our own careers. For example, I personally follow Laszlo Bock, Google's VP of Human Resources, as I find his advice on resumes and job applications interesting.

Nevertheless, following influencers to understand success is biased and can easily lead to wrong conclusions. This is survivorship bias, also called silent evidence by Nassim Nicholas Taleb in his book The Black Swan. He illustrates it with the following story:

Diagoras, a nonbeliever in the gods, was shown painted tablets bearing the portraits of some worshippers who prayed, then survived a subsequent shipwreck. The implication was that praying protects you from drowning. 
Diagoras asked, “Where are the pictures of those who prayed, then drowned?”




Do we know how many people applied the "5 rules to change your life" but failed and never made it to become LinkedIn influencers? Until we do, we should stay skeptical: don't take whatever a top influencer says as truth, but rather try to validate it. The little simulation below shows how loud the survivors can be while the failures stay silent.
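Here is a tiny simulation of silent evidence; all the numbers are invented, and the point is only the asymmetry between who we hear from and who stays silent:

```python
import random

# Toy model: many people follow the same "5 rules", success is mostly luck,
# and we only ever hear from the winners.
random.seed(42)
followers = 100_000
success_rate = 0.0005          # chance of "making it", rules or no rules

survivors = sum(random.random() < success_rate for _ in range(followers))
print(f"{survivors} influencers telling you the rules work")
print(f"{followers - survivors} people who followed the same rules, in silence")
```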

Thursday, November 17, 2016

CDN benchmarking

Today, when you want to compare the performance of different CDN providers in a specific region, your first reflex is to check public Real User Monitoring (RUM) data, Cedexis being one of the best-known RUM providers. This data is very useful, and some CDN providers buy it in order to benchmark against competitors and work on closing performance gaps.



In the following I will highlight what exactly RUM measures, so you do not jump too quickly to imprecise conclusions. Let's focus on the latency KPI and list the different components that contribute to it:
  • Last-mile network latency to the CDN Edge on the ISP network, which reflects how near it is to the user.
  • Caching latency, which is mainly incurred when the CDN Edge does not have the content and must go back to the origin to fill it.
  • Connectivity latency from the CDN Edge to the origin.


In general, RUM measurements are based on the round-trip delay (RTD) it takes to serve the user a predefined object from the CDN Edge. Since the object is the same and doesn't change, it is always cached on the edges, so the measurements reflect solely the last-mile network latency. But that's not the whole picture, because in real life CDN edges need to fill content from the origin:
  • Depending on the cache eviction policy and the available disk space, a request may be a cache miss. The less storage capacity present on the Edge, the higher the CDN's caching latency.
  • Depending on the CDN backbone, the more hops you need to cross to reach the origin, the higher the connectivity latency. On this aspect, for example, Tier 1 IP networks that provide CDN services are very well optimized.
For highly cacheable content, a comparison based only on last-mile latency makes sense, but it reaches its limits when that is not the case, such as for long-tail video streaming or dynamic content. The sketch below illustrates the difference.
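A minimal model, with invented figures, of what the cached-object measurement misses:

```python
# RUM on an always-cached object only sees edge_ms, while real traffic also
# pays the origin fill on every cache miss.
def effective_latency_ms(edge_ms, origin_fill_ms, hit_ratio):
    return hit_ratio * edge_ms + (1 - hit_ratio) * (edge_ms + origin_fill_ms)

# CDN A: great last mile, weak path to origin; CDN B: the opposite.
print(effective_latency_ms(edge_ms=20, origin_fill_ms=200, hit_ratio=0.6))  # 100.0
print(effective_latency_ms(edge_ms=35, origin_fill_ms=80, hit_ratio=0.6))   # 67.0
```

On a pure last-mile comparison, CDN A wins; once the hit ratio drops, CDN B can deliver the better overall latency.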

Monday, November 14, 2016

Sales Engineer, modus operandi

Recently, a friend asked me: "What qualities would you look for if you had to recruit someone in the same position as yours, i.e. Sales Engineer?" and, of course, to make it easier, he asked me how to evaluate those qualities. In this post I will try to answer this very interesting question, which pushed me to stand back a bit and think about my role with some detachment.

The Sales Engineer (SE) role can be quite different from one company to another, and comes with different titles: Solutions Engineer, Presales Engineer, Solutions Architect, Consultant... In fact, an SE can be more or less specialized or generic, more or less involved in delivery, in pricing, in bid management... In a nutshell, the SE, part of the sales team, is a professional who provides technical advice and support in order to meet customer business needs. Nowadays, companies seem to have difficulty finding such a profile, one combining business acumen with extensive technical knowledge.




The first word in SE is "Sales", but let me start with "Engineer", because technical knowledge is the solid foundation on which trust is built with customers. Indeed, an SE must be ready to dive into a technical subject as deeply as the business requires, understand customer problems and solve them. Nonetheless, static knowledge is not sufficient in a fast-paced, changing technological landscape: this is where the SE's curiosity and passion for learning are vital for his "survival".
Let's take the case of an SE specialized in CDN. He knows very well how the internet works: the TCP/IP stack, the DNS system and the HTTP protocol. He can explain at a high level how caching works, but can also dig into HTTP RFC 2616 if needed to answer a specific question about caching. He follows the latest trends in his industry, such as HTTP/2, TLS, SDN and security, and looks closely at what the competition is doing.

So, back to the "Sales" in SE. First, the company counts on the SE's alignment with sales targets, as well as on his strategic thinking, to create competitive advantage for its products. Second, the SE regularly gives presentations or demos to customers, so he needs good communication skills, must excel at storytelling, and must be attentive to his audience in order to adapt in real time. Finally, I would say that an SE must handle the stress and pressure that come with sales dynamics. A typical assessment of this skill set would be asking the SE to give a presentation and challenging him during it.

Now, the best of the SE lies in the synergy between "Sales" and "Engineer". Being able to deliver a technical pitch at variable depth, the SE builds relationships and trust at different levels within the customer organization. For example, our CDN SE brings value to the customer's CTO by explaining how his product can help increase revenues by improving the online buyer experience, or cope with the Christmas load on his infrastructure, while at the same time advising the customer's website admin on caching best practices for an optimal CDN setup. By talking with the customer and asking the right questions, he is able to translate business requirements into a technical solution.

In addition, some other skills are very nice to have in this role, such as coaching partners, training salespeople, managing projects...

I am lucky to have held an SE position for more than 5 years now, a position that sits in a special place within the organization, at the crossroads of sales, engineering, business development, product management... This role has changed me a lot, and I can already feel the opportunities it is opening up for me.

Wednesday, November 2, 2016

How to evaluate a DDoS mitigation solution?

Let me start with this funny story. Marie, a 16-year-old school student, was our guest at the office for a week to discover the professional world. We told her about our business, networks, the internet... but when we started talking about IT security threats, we got a hilarious surprise: she confessed to having already launched a DDoS attack on her school's website, so that her parents could not access her results on the day grades were published online!

With almost no entry barriers to launching DDoS attacks nowadays, the industry is witnessing considerable growth in the number and size of attacks. Unprotected connected objects have even driven this growth exponentially, with IoT-infected botnets being massively used as attack vectors. In the last 30 days, KrebsOnSecurity took a 600 Gbps attack, and OVH one of over 1 Tbps. The latest attack hit the DNS provider Dyn, whose failure impacted major internet services such as Netflix. The Mirai malware was used to launch it, scanning and infecting more than 500,000 connected cameras, DVRs...

To protect themselves, companies are dedicating a larger share of their budgets to security, creating a very attractive emerging business for providers from different horizons: vendors who started providing services based on their own technologies (Arbor, Radware), network operators (Level 3, Tata), CDN providers (Limelight, Akamai), security providers (Incapsula, Cloudflare) and cloud providers (Azure, Rackspace).

Each positioning and implementation has its strengths and weaknesses. In the following, I'll share some key technical elements to take into consideration when evaluating a DDoS mitigation solution.

I'll start with a quick description of DDoS attack layers. Attacks target resources at different layers, each of which is critical for service continuity. Volumetric attacks either try to flood internet bandwidth, mostly with reflection mechanisms (DNS, NTP, CHARGEN...), or overwhelm frontal network equipment, for example by exhausting router CPUs with packet fragmentation. In the upper layers, an attacker can target the middleware, such as the HTTP server, with brute-force GET requests and slow-session techniques, or target the application layer directly with well-crafted HTTP requests that exhaust the application logic or its database.




Providers should be able to protect against DDoS attacks at different layers:
  • Protection from volumetric attacks requires considerable infrastructure capacity in terms of network and scrubbing centers.
    Indeed, the number and geographical distribution of scrubbing centers are critical to absorption capacity and robustness, since they allow an attack to be mitigated closest to its source, before it forms an avalanche that is much riskier to handle afterwards. Think of a provider lacking presence in regions like APAC, or having only one scrubbing center in a given region. For the same reasons, scrubbing centers should be connected to the internet through extensive peering and network capacity.
    On this ground, Tier 1 operators are best positioned to deal with the largest attacks (e.g. 1 Tbps) thanks to their scale. For example, Level 3 has implemented BGP Flowspec on its backbone, leveraging its edge capacity (42 Tbps) to block some volumetric attacks before they even reach the scrubbing centers.
  • Protection from upper-layer DDoS attacks is more about the intelligence in the scrubbing centers. Some providers use proprietary technologies from Radware, Arbor or Fortinet, some mix them for better security, and some simply do not use any, to avoid licensing fees and thus be more competitive price-wise. What is the underlying technology capable of? Signature-based detection only, or behavioral analysis as well? Does it handle SSL traffic? What is its false-positive ratio? Is it compatible with hybrid (cloud + on-premise) implementations?
  • In all cases, mitigation should be powered by threat intelligence capabilities. For example, a botnet can be identified before any attack by its communication profile with C&C servers, and the associated infected IPs fed to the mitigation technology (see the sketch after this list).
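Here is a minimal sketch of that last point; the feed entries are invented (documentation IP ranges), and real mitigation platforms of course do far more than a set lookup:

```python
import ipaddress

# Hypothetical threat-intelligence feed of botnet members identified by their
# communication profile with C&C servers.
botnet_feed = {"203.0.113.7", "198.51.100.23"}
blocklist = {ipaddress.ip_address(ip) for ip in botnet_feed}

def allow(packet_src: str) -> bool:
    # Drop traffic from known-infected sources before any attack starts.
    return ipaddress.ip_address(packet_src) not in blocklist

print(allow("203.0.113.7"))   # False: known bot, dropped
print(allow("192.0.2.10"))    # True: clean source
```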

One last thing I want to mention is performance. It's not enough for providers to stop a DDoS attack; they should also guarantee that normal traffic won't suffer from performance issues. Let me illustrate with some examples:
  • A large percentage of your traffic is very local to a region (say, the Middle East) where the provider has no scrubbing center. Your traffic will then travel to Europe to be scrubbed and back to the Middle East, adding considerable latency (the sketch after this list puts rough numbers on this detour).
  • Your provider has only one scrubbing center in Europe, which gets critically loaded by attacks on several of its banking customers. In this case, your traffic will be rerouted to the next nearest scrubbing center, for example in North America, again adding considerable latency.
  • Routed mitigation solutions use BGP to divert your traffic, on a /24 subnet, from your AS to your provider's AS; the provider cleans it and sends it back to you. The first thing to consider is BGP convergence time, because it impacts the overall time to mitigate. Convergence time decreases when your provider is very well connected to the internet, and can even be near-instantaneous if you use the same provider for internet connectivity. The second thing to consider is the impact of rerouting your whole /24 subnet when only one host is targeted. Does your provider give you the possibility to reroute only the attacked IP?
  • Your provider uses a scrubbing technology that requires intensive tweaking and human intervention for each attack mitigation. In this case, you can expect a longer time to mitigate.
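A back-of-the-envelope illustration of the scrubbing detour from the first two examples, with invented figures:

```python
# Extra round trip added when Middle East traffic "trombones" through a
# European scrubbing center during mitigation.
direct_rtt_ms = 30            # normal path, user to service within the region
detour_one_way_ms = 60        # region to European scrubbing center, one way

scrubbed_rtt_ms = direct_rtt_ms + 2 * detour_one_way_ms   # out and back
print(f"RTT under mitigation: {scrubbed_rtt_ms} ms "
      f"(+{2 * detour_one_way_ms} ms added)")
```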

Monday, October 10, 2016

Networks & economic paradigms


I will start with the above revisited version of Maslow's pyramid of human needs. It's a funny expression of how the internet is now a basic need, making all of us "data" consumers. Data delivery to consumers is organized in different ways, following different economic models. In the following, I will go through these models and their correlation with known paradigms for organizing an economy: liberalism, the centrally planned economy and the participatory economy.

The first paradigm is the current decentralized organization of the internet, based on liberalism, or free trade, the dominant ideology nowadays. A user is connected to the internet through an eyeball network, e.g. a local ISP or mobile operator. Eyeballs, in turn, are connected to the internet via different kinds of peering:
  • Peering directly with content providers such as Google, Amazon & Netflix,
  • Peering with other regional eyeballs to exchange traffic directly,
  • Peering with backbone providers such as Level 3 & Cogent, who globally connect eyeballs together.
The dynamics driving network meshing are very interesting. An eyeball has many questions to answer in order to guarantee good internet connectivity and a profitable business (a toy break-even is sketched after this list):
  • Which of the above kinds of peering should we build, and in which proportions?
  • With which networks should we peer? At what capacity? Private peering or through internet exchanges?
  • In which geographical locations should we peer with a given network? In which carrier hotels?
  • What is the cost of the transport network to those locations?
  • Should we pay for a peering, or is it free?
  • How should we diversify peerings to guarantee resiliency?
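As a toy illustration of the "pay or free" question, here is the classic break-even an eyeball might compute; all prices are invented:

```python
# Peering pays off when the transit fees it avoids exceed the fixed monthly
# cost of the peering port, cross-connect and transport to the carrier hotel.
traffic_gbps = 40
transit_price_per_mbps = 0.50      # $/Mbps/month, invented
peering_fixed_cost = 6_000         # $/month, invented

avoided_transit = traffic_gbps * 1000 * transit_price_per_mbps
print("Peer" if avoided_transit > peering_fixed_cost else "Stay on transit")
```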
The main driver for building internet networks is profit, because they are run by the private sector. Along the way, we have witnessed competition, price compression, innovation, the rise of broadband... But making profit means serving only solvent consumers, which has led to the unfair picture of the world below in terms of network connection density.


The second paradigm is the organization of the access network of a national eyeball, which is based on central planning. Indeed, network expansion and deployment are planned according to usage forecasts given by marketing studies, and decided centrally by the eyeball in a top-down approach. The main driver for building such networks depends on whether it is a public or private service, and on telecom regulation pressure, leading to more or less fair network coverage.

The main advantage of this model is a more rational and efficient use of resources (network assets, people) to satisfy the present and future needs of the population. On the technical level, the network is more controlled and can thus potentially provide better service, for example:

  • Traffic types (voice, download...) are differentiated and quality is managed end to end.
  • Traffic routing is better controlled, with any protocol of choice, whereas on the internet only BGP can be used, with its limitations.
  • Specific techniques can be used to optimize network usage, such as multicast for video streaming, which is almost impossible on the internet.
But on the other hand, planning cycles have significant inertia and often can't cope with demand dynamics. Moreover, eyeballs are not leaders in terms of innovation; for example, the ongoing SDN/NFV revolution in networking is driven by software companies like Google.


The third paradigm is the FON model, based on participatory economy, i.e. the network is crowdsourced by the users themselves.



As explained in the above video, a Fonero (a user participating in the FON network) shares his home WiFi connection with others in a secure way, and in return has access to other Foneros' WiFi anywhere, anytime. By making use of the idle bandwidth on your internet box, you gain free access to thousands of hotspots around the world. It's the same concept as P2P file sharing on the internet.

The main driver of such communities is making the world a better place, in a bottom-up approach. Agility, innovation, open standards & free service are the keywords of this model.

As a final word, I personally believe in a fourth paradigm, a mix of the last two. I will try to develop it in a future post.





Sunday, September 4, 2016

What this burkini is hiding ...


I will hold it against myself for having joined this false debate, given its timing, its scale and its field of analysis. Nevertheless, I will take the opportunity to revisit the theoretical notions of secularism (laïcité), liberty and equality, and then comment, more pragmatically, on the hypocrisy of the debate and its dangerous political underpinnings.




The best definition of secularism I have found comes from Albert Jacquard: secularism consists in making decisions in the name of human beings, their needs and their aspirations, not in the name of an external revelation such as a religious directive. It does not mean the repression of religions, as we are beginning to see in France. Nor is it limited to religious doctrine: it extends to any doctrine that seems sacred to us today, such as capitalism with its values of private property and competition between people within a market logic. In that sense, our Western society is far from secular.

Secularism thus recognizes the individual as a free citizen participating in society and in its fate, and forming an integral part of it. It recognizes his identity, and helps him forge it. When the individual finds himself deprived of real participation in society, his relationship with it becomes conflictual and may end either in conscious revolt, in revenge against himself, or in violence against society. Our Western society, with its illusory delegative democracy and the concentration of power in the hands of a single social class, excludes entire categories (the poor, women, immigrants...) from this participation. Fanaticism around the world is partly the expression of this exclusion, whether religious, like Daesh, or political, like the far right. Indeed, it is a search for identity that can turn suicidal, as Amin Maalouf would say, especially in the absence of revolutionary forces channeling the anger.

So citizenship requires the liberty of the individual as a precondition, but that liberty still has to be defined. The liberty of an individual in a society only has meaning when it is associated with constraints. This liberty-constraint dynamic feeds the citizen's participation in his "city" and gives it substance, but only on condition that it is equal for all citizens. The equilibrium this dynamic reaches results from the continual dialogue between individuals and society, which is, in the end, the very meaning of their participation. This equilibrium is not immutable; it evolves as society evolves. It is from this angle that I would like to consider the burkini. The liberty to wear it comes with the constraint, for others, of seeing it, and thus with the constraint of seeing others exercise their own liberty of dress. I therefore ask the following questions:
  • Have the women wearing the burkini had the possibility of truly participating in society?
  • Is the identity they have forged the result of free participation? Of a dialogue with society?
  • Have we considered the liberty/constraint dynamic of dress while respecting the principle of equality?
Not in my eyes. Personally, I consider the burkini a symptom of male domination over women, but I believe the only way to end it is to invite the people concerned to participate freely, and on an equal footing, in society. Any attack on the symptoms, without treating the disease, will only make the disease worse.

Now let's move on to the more concrete aspects of the burkini debate in France. It is hypocritical because it claims to defend women's rights, whereas this is probably the least structural issue in that field compared to others, such as women's pay relative to men's, or the recognition of pregnancy as work... It is also hypocritical because it claims to defend secularism, while secularism is undermined by every decision taken in the name of the people but not emanating from them, for example through the use of article 49.3.

Finally, the real stakes behind this whole controversy are political: mobilizing people, harvesting votes and diverting attention. The danger of this game is that it pushes people further into fanaticism and prepares the murderous identities of tomorrow.




Wednesday, April 27, 2016

Extra hints for choosing your CDN provider

Why additional hints? Because you can already find a lot of literature on the subject, for example this nice article by Jon Alexander, the CDN product manager at Level 3. What I want to add here are some further questions, based on my personal experience, so you can avoid surprises later on, once your CDN contract is signed. I'll also bring up the multi-CDN question.

So, as rule number one: a map of CDN nodes is simply not enough to make a good decision! It's important to know the following elements in more depth:

  • The most important verification is checking with whom they are networked in your country of interest. For example, in Egypt it's mandatory to have good network capacity with Telecom Egypt, which holds the largest share of the local market.
  • How are they networked with the eyeball? For example, a dedicated private connection is definitely better than public IX peering from a performance perspective, but it is more expensive.
  • What is the CDN cluster's capacity, and its uplink bandwidth?
  • What is the architecture of the cluster? Is it a bunch of small servers or a couple of high-density servers? Are they Squid-based, or using newer beasts like Nginx or Varnish? Indeed, the underlying caching engine has an impact on latency and cache efficiency.
  • Sometimes a CDN provider can outperform another even without a local node, for all of the above reasons, so the best way to evaluate is to test!

Now that you know where your CDN provider is present, it is important to know its capabilities in these different locations. Is SSL available everywhere? The same question should be asked for every feature relevant to you. Indeed, a CDN platform evolves over time through acquisitions and technological updates, like any other platform, leading to a heterogeneous ecosystem. You therefore need to verify whether your CDN properties will be available in all the locations so brightly presented by sales :)

As I said previously, the best way is to test, but beware of testing biases! First, make sure that your CDN provider has provisioned a trial platform similar to the one you'll be using in your real environment. Then try to evaluate CDN performance during peak hours, for example in the evening, and under load, because CDN behavior can vary under different conditions. You can use third-party analytics tools such as Catchpoint & Dynatrace, but you should understand the limits of their evaluation: their probes are located in datacenters, so they don't reflect the end-user experience 100%.
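A quick self-serve check you can run during the trial; the URL is a placeholder for your own test property, and the cache-status header name varies per CDN ("X-Cache" is common but not universal):

```python
import requests

# Fetch a test asset twice and look for a cache HIT on the second request.
url = "https://trial.example-cdn.com/static/test.jpg"

for attempt in (1, 2):
    resp = requests.get(url)
    cache_status = resp.headers.get("X-Cache", "header not exposed")
    print(f"attempt {attempt}: {resp.status_code}, cache: {cache_status}")
```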

What also makes a good CDN solution is layer 8 of the OSI model: the people! Indeed, when things go wrong, or you want to make critical changes, you'd love to have quick and helpful support. Do you know where the support team is based? Their working hours? To which organization they report (regional or global)? Any SLA on ticketing? Try to test this as much as possible during your trial.

The last point I want to mention concerns commercial terms. Most CDN providers price per GB of traffic. Make sure you know which GB you are talking about before comparing prices! Is it egress only? Egress + ingress? Do they bill midgress traffic? (The sketch below shows how much this can change the bill.)
Give some consideration to the commercial flexibility of your provider: what kind of commitment do you have? Volume, financial, duration? Do they offer flexible models that can adapt to your business fluctuations? You can evaluate that during the trial, for example by seeing whether the CDN provider is rigid on trial extension, billing start date, feature testing...
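A toy comparison of how much the definition of a billable GB changes the invoice; volumes and price are invented:

```python
# The same month of traffic billed under three common definitions of "a GB".
egress_gb, ingress_gb, midgress_gb = 500_000, 20_000, 80_000
price_per_gb = 0.02   # flat illustrative rate, $/GB

print("egress only:      ", egress_gb * price_per_gb)
print("egress + ingress: ", (egress_gb + ingress_gb) * price_per_gb)
print("plus midgress:    ", (egress_gb + ingress_gb + midgress_gb) * price_per_gb)
```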

Finally, I'd like to share some thoughts on multi-CDN strategy. Like any solution, it makes sense in certain cases but not in others. Check the Cedexis website, for example, to learn more about the benefits of multi-CDN. In the following, I'd like to highlight some of its limits (see the sketch after this list for the cache-efficiency point):
  • Additional costs: the CDN load balancer service, higher CDN rates because of lower commits, additional origin traffic, and the internal cost of managing multiple providers.
  • Increased solution complexity, especially when you have advanced features that require deeper integration with each CDN.
  • Risk of a SPOF within the CDN load balancer.
  • Lower cache efficiency.
  • Is the load-balancing algorithm suited to your business challenges?
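On the cache-efficiency point, a toy illustration with an invented catalog: splitting requests across two CDNs halves how often each cache sees any given object, so long-tail content is more likely to have expired or been evicted before the next request:

```python
requests_per_day = 1_000_000
objects = 200_000                      # long-tail catalog size
per_cdn_requests = requests_per_day / 2

print("avg requests/object/day, single CDN:", requests_per_day / objects)   # 5.0
print("avg requests/object/day, 2-CDN split:", per_cdn_requests / objects)  # 2.5
```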