
Wednesday, March 11, 2020

Running a rich Microservice constellation in Kubernetes

Languages have trade-offs, like any solution in tech. Microservices are great for many reasons; one of them is that they allow the best tool for the job. Kubernetes lets us keep the same operational standards and procedures even when using different languages and solutions. For this blog post, I want to show a simple project I built, which is a constellation of services. There are 5 microservices, written in several languages: Scala, Java, Go, Python, and Rust. The services do pretty basic math operations (+, -, /, *), and the Scala one does the aggregation and orchestrates the other services using a Polish notation algorithm and REST calls.

The Video



Calc Services (5 Microservices Scala, Go, Rust, Python, and Java) running on K8s from Diego Pacheco on Vimeo.

The Source Code

The complete source code is here on my GitHub.

Cheers,
Diego Pacheco

Wednesday, July 4, 2018

Having fun with Cats

Cats is a library for functional programming in Scala. The name comes from the inspiration by Category Theory: Cats is short for Category Theory. Cats is about types, the type system, and having abstractions that make it easy to work with them. If you want to go to the next level of functional programming, Cats is the way to go; however, I need to say that it's hard. Getting the ideas takes time, and understanding the terms, principles, and applications takes time as well. At the end of the day, this is about style, so there is no right or wrong. Recently I made a post about Monads, so it might be useful to read it if you haven't yet. This post won't be monad-heavy, but reading about monads will make it easier to understand some things here. It's fine if you don't understand everything; I'm 100% sure I don't understand everything, and there are lots of ideas and concepts that I'm still learning. So this takes time. I don't think this is something that will make sense right away, or that you will be able to apply right away, unless you are already studying this or are a Haskell engineer. For me it was fun to learn Cats (I'm still learning), but you might not find it cool at all, which is fine too. I work with multiple languages like Java, C, Scala, and Python. I have more fun with Scala than with the other languages; however, using Scala and Cats, the compiler errors are the most complicated, and it is hard to figure out what I'm doing wrong. Sometimes it feels like I'm trying to solve a puzzle without knowing what I'm doing. So I don't think math makes programming 100% cleaner than OOP, and sometimes understanding my uncle's abstractions is faster, easier, and less painful, but they are not math-proofed, since they are my uncle's abstractions :-) The disclaimer is: this post sits on some middle ground between theory and practical programming.

The Cats Structure

Cats always follows this structure:

  • A Type Class: a trait in Scala
  • A Type Instance: an object in Scala
  • A Type Interface: an object in Scala
  • An Implicit Class: an implicit class in Scala, for better syntax
This way there is complete decoupling between the type class (the algebra), the type instances (the bindings for the types you want to support), and the type interface (the functions that will be generic for all your supported types). It's possible to add more types to this system without having to touch the previous classes; that's why type classes are much better for extension than OOP. Let's take a look at a code sample to get this pattern right.
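To make the pattern concrete, here is a minimal sketch of a Show type class following this structure (the names besides Show and ShowInterface are illustrative):

// Type Class: a trait in Scala
trait Show[A] {
  def show(a: A): String
}

// Type Instances: objects in Scala, one per supported type
object ShowInstances {
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int): String = s"Int: $a" }

  implicit val stringShow: Show[String] =
    new Show[String] { def show(a: String): String = s"String: $a" }
}

// Type Interface: generic functions for all supported types
object ShowInterface {
  def show[A](a: A)(implicit s: Show[A]): String = s.show(a)
}

// Implicit class for better syntax: 42.show instead of show(42)
object ShowSyntax {
  implicit class ShowOps[A](a: A) {
    def show(implicit s: Show[A]): String = s.show(a)
  }
}

With import ShowInstances._ and import ShowSyntax._ in scope, you can write 42.show or "cats".show directly.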

Here we have all the patterns together in the same code sample. The type class here is called Show; the idea of Show is to provide a contract for how to show some type, which might mean printing to the console, logging to a file, or sending to a TCP port for centralized logging. It does not matter which, because the function is decoupled from the type here. Then we have our type instances; there are 2 of them, for Int and for String, which means we can't show a Double, for instance. If you want to show Double, you can provide the type implementation, and that happens outside of this file, so as you can see this is super extensible, correct, and backed by the compiler. Then we have the type interface, which is the ShowInterface; in this case, it is how we provide functionality for the type. Finally, we have an implicit class in order to provide better syntactic sugar.

A Bit of Theory and Definitions

Cats has type classes like Semigroup, Functor, Monoid, Applicative, and many others, and also has data types like Either, Eval, Ior, Id, Kleisli, State, and many, many others. Data types inherit from each other, and this piles up functions. Pretty much everything starts with a Semigroup, which is just a type that implements combine and is capable of combining 2 values of the same type. A Monoid is just a Semigroup that also provides the empty function. Cats provides type instances for the most common and useful types in Scala, such as List, Map, Option, Either, Try, Future, and many, many others. So you don't need to define the algebra and functions for these types, since Cats provides that for you out of the box; however, you might need to provide evidence (an implementation) for your own business types. Functor is something that provides a map implementation and is great for applying effects to data. There are type class instances for Functors, Monoids, and Monads for the same common types I mentioned before. Type classes have laws which always need to hold; laws are something that gives you a guarantee of composability, however, they don't help you understand what the type is or what the type is doing. Some types in Cats are very straightforward and easy to understand, like Semigroup, and others are very complicated, like Cons.

Scala and Cats

Let's see how easy (for these samples :-)) it is to use Cats with Scala.

Monoid Sample - Here we are using Option and Monoid and combining 2 Options.
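A minimal sketch of what this can look like (using the Cats 1.x import style):

import cats.instances.int._    // Monoid[Int]
import cats.instances.option._ // Monoid[Option[A]] given a Semigroup[A]
import cats.syntax.semigroup._ // the |+| combine syntax

val a: Option[Int] = Some(1)
val b: Option[Int] = Some(2)

a |+| b    // Some(3)
a |+| None // Some(1)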


Either Sample - Here we are using Int as Either with nice syntax and combining them with for comprehensions in Scala.
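A sketch of the idea (the values are illustrative):

import cats.syntax.either._ // asRight / asLeft syntax

val a = 3.asRight[String] // Either[String, Int] = Right(3)
val b = 4.asRight[String] // Either[String, Int] = Right(4)

val result = for {
  x <- a
  y <- b
} yield x * x + y * y // Right(25)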


Functor Sample - Here we have functor lifting, where we take a normal function and turn it into a Functor without having to implement the type algebra.
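A sketch of lifting a plain function into the Option context:

import cats.Functor
import cats.instances.option._ // Functor[Option]

val plusOne: Int => Int = _ + 1

// lift turns Int => Int into Option[Int] => Option[Int]
val liftedPlusOne: Option[Int] => Option[Int] = Functor[Option].lift(plusOne)

liftedPlusOne(Some(1)) // Some(2)
liftedPlusOne(None)    // None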



That's it - I hope you had fun and that this triggered your curiosity to learn more about Cats and functional programming patterns, math, and decoupling. I'm not an FP zealot; however, I see the benefit in theory, so I search for the balance between theory and practical programming. If you are wondering whether anyone uses these kinds of ideas in production with Scala: yes. Twitter, Yelp, Verizon, and many other big companies do, and these ideas are used in Spark and Big Data as well.

Cheers,
Diego Pacheco

Wednesday, June 27, 2018

Scala Monads 101

What is a monad? It's easy to read lots of sites and still not get the idea. Many people have already used monads, but few truly understand them. Monads are kind of a myth in Scala for lots of developers. Is it possible to code without creating your own monads? Yes, for sure it is. It's very likely you have already created abstractions which were similar to monads but with different names. Here are some samples of monads in Scala: Try, Either, and Option. We have some of these monads in Java 8 as well, like Optional. Monads come from Category Theory, which is a branch of math. Monads are a very strong concept in the Haskell language, for instance. IMHO, one of the reasons to use monads rather than your uncle's abstraction is that they are universal and, in theory, easier to understand; in practice, if you don't study math, Category Theory, and functional programming principles, monads will actually sound much harder than your uncle's abstractions. So, in other words, we are talking about a common universal standard (math) versus custom-made abstractions which you will need to learn again every time. Monads can be very hard to digest, but once you have learned them, they are always the same thing, in theory :D

Why Are Monads Hard?

IMHO it's because it is all about math, and the fact that historically functional programming started in academia, so there is much more older, academic FP material on the internet than recent material. When I say now, I mean that now we live in a post-functional era where things are more practical. Monads are very similar to Functors, since both are wrappers. Wrapper, to me, is the best definition of a monad/functor. Imagine you have some data and you want to wrap that data with some context, so you can provide extra meaning or extra functionality; in other words, understanding and/or functions.

Going back to the math, sorry, just a little bit more... A Functor means that you apply a function (called map) to a wrapped value. What's the difference from a Monad? Well, with a monad you use flatMap :D It's possible to map a function with another function; this means 2 things: first, that we are doing function composition, and second, that functions are functors too. So we can end up with functions and values wrapped by context. In order to take a value wrapped in a context and apply it to a function wrapped in a context, we need an Applicative, which has the function apply. Monads apply a function (which returns a wrapped value, also known as flatMap) to a wrapped value. So monads are about flatMap. So are Applicatives and Monads the same thing? Similar, but monads are more powerful, since you can get the result from one computation and pass it to another one.

Let's take a look at how to define a monad type in Scala.
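A minimal sketch of such a trait (the encodings in real libraries are more general than this):

trait Monad[A] {
  // wraps a raw value into the monad
  def pure(a: A): Monad[A]
  // applies a function that itself returns a wrapped value
  def flatMap[B](f: A => Monad[B]): Monad[B]
}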

Here we have 2 functions: pure and flatMap. As you can see, the monad takes type parameters, so we can specify proper types when we do concrete implementations. Concrete implementations are very simple and easy to do. The function pure, also known as apply or unit, means that I pass the raw value A and A will be wrapped in a monad, Monad[A]. The other function, flatMap, does the same, but the difference is that the value comes from a function that wraps that value, rather than from the pure value. That's why it is called flatMap: the function knows how to transform/map the computation to the wrapped value.

I won't cover the monadic laws here, in order to not make this more complicated than it already is, but if you are curious there is a great post about Monadic Laws in Scala. Besides implementing pure and flatMap, a monad has 3 properties that always need to be true; those are the monadic laws.

Let's see a theoretical application of monads before we see a practical one. Let's say we want to wrap numbers with classifications like Even or Odd. We can write a simple monad to do that; let's take a look at the code.
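One way to sketch it, reusing the Monad trait above (a simplified sketch):

case class Even(value: Int) extends Monad[Int] {
  def pure(a: Int): Monad[Int] = NumberMonad(a)
  def flatMap[B](f: Int => Monad[B]): Monad[B] = f(value)
}

case class Odd(value: Int) extends Monad[Int] {
  def pure(a: Int): Monad[Int] = NumberMonad(a)
  def flatMap[B](f: Int => Monad[B]): Monad[B] = f(value)
}

// factory object: classifies the number on creation
object NumberMonad {
  def apply(value: Int): Monad[Int] =
    if (value % 2 == 0) Even(value) else Odd(value)
}

NumberMonad(3).flatMap(x => NumberMonad(x + 1)) // Even(4)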



Here we have our Monad trait with Even and Odd implementations, plus a factory object that creates the proper monad based on the fact that number % 2 == 0 means even, otherwise odd.

Practical Usage in Real Life

Future and Option are great samples of real-life usage of monads. You can create your own monads as you find the need, to provide proper abstractions and functionality. Scala has Scalaz, which is a library that comes fully fledged with functional programming and Category Theory principles. List and Set are monads in Scala too. It's very likely that at this point you are convinced that monads are real and that you have already used them a lot.

Let's take a look at a practical real-world example.
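A sketch of the idea, using plain Scala Futures (the service names and return types are illustrative):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// three independent async services
def bookPlaneTicket(from: String, to: String): Future[String] = Future("TICKET-1")
def rentCar(city: String): Future[String]                     = Future("CAR-42")
def bookHotel(city: String): Future[String]                   = Future("HOTEL-7")

// the for comprehension is just nested flatMap/map composition
val trip: Future[(String, String, String)] = for {
  ticket <- bookPlaneTicket("POA", "GRU")
  car    <- rentCar("Sao Paulo")
  hotel  <- bookHotel("Sao Paulo")
} yield (ticket, car, hotel)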

In this sample you can see we have 3 services: a service to book plane tickets, a service to rent a car, and a service to book a hotel. We are doing a big function composition using flatMap, so the computation of one step flows into the next. In order to have clearer code, we use for comprehensions in Scala. This is nice because this code is verified by the compiler and is all async, but async in a safe way, which, again, means verified by the compiler.

I hope this helps you digest monads and find the middle ground between mathematical theory and practical system development. Needless to say, these concepts are getting bigger than Scala and Haskell, since pretty much every single language today has some functional influence now.

PS: There is an interesting fact about the image I used for the cover of this blog post: that image is the graphical representation of a monad following category theory.

Cheers,
Diego Pacheco

Tuesday, June 26, 2018

Experiences Building & Running a Stress Test / Chaos Platform

Stress testing is something everyone should be doing if they care about performance. I run software on AWS, so it is easy to spin up more boxes, but that is also more expensive, and you might be in a trap: you might have ineffective software with memory leaks, hidden bugs, and untuned servers, which will all show up with scale or with stress tests, whichever happens first. Unfortunately, this is still not very popular among most microservice developers. Creating and running stress tests is hard and tedious because it involves lots of moving parts. A single microservice could call several other microservices, and some of the downstream dependency graphs can get complex easily; this is just one of the many challenges involved with stress tests. Today I want to share some experiences in building and running a stress test and chaos platform. Do you know how many concurrent users or requests per second (RPS) your microservices can handle? Do you know what the COST will be to scale your services? Do you know if you can SCALE your services in a COST-EFFECTIVE way? All these questions can be answered with proper stress tests.


Why Build a Stress Test / Chaos Platform

I run stress tests with Gatling. Gatling is a great tool, not least because you write your tests in Scala (not XML or some complex UI like JMeter). The Gatling Scala DSL is very simple and effective for testing microservices via HTTP / REST APIs. So why build a platform? Why is running locally not enough? First of all, running locally is fine in order to get the stress test scenarios right, but as a baseline it is completely wrong, since you don't have the same hardware/infrastructure as production. Secondly, there are several other questions we need to answer in order to know what's going on; these questions are impossible to answer locally:

  • Do you know what the previous results were? Are you faster or slower?
  • Does your service have more or less latency? What was it before?
  • Did you increase or decrease the RPS (Requests Per Second)?
  • What about resource usage? Are you CPU, Memory, IO, or Network intensive?
  • What about chaos? If a downstream dependency dies, what happens to your service?
  • Do you test all that manually, or is it automated?

I always remember the Google SRE book when they say "Hope is not a strategy". Most developers don't care and/or don't have the tools to answer these questions in a productive way. To be able to answer these questions, we need to have the following capabilities:

  • Set up and run stress tests
  • Collect and store data for further comparison (baseline)
  • Analyze results
  • Isolation: make sure different people don't mess up each other's tests

The last capability is complicated since, in theory, it would be possible to have a dedicated production environment per developer, but in practice this is not COST-EFFECTIVE. So a sort of SHARED environment is needed. Sharing an environment requires some kind of scheduling/placement, so we only run stress tests that don't use the same downstream dependencies, in order to keep the tests fully isolated.

What about Chaos Engineering? My current project runs on AWS using the NetflixOSS stack. I run chaos tests with Chaos Monkey from the Simian Army. How do we check that, when a downstream dependency goes down, a service call falls back to another AZ or to a static fallback and recovers from that failure in an automated way? The trick here was to use stress tests for chaos verification. So basically a stress test is running while the chaos testing is running. This way we re-use the same stress tests for chaos, and we don't need to write 2 verifications.

What We Built / How It Works

There are 3 phases in the whole stress test / chaos process: Phase 1, Plan and Code; Phase 2, Execution; Phase 3, Analysis. Let's get started with Phase 1.

Phase 1 - Plan and Code



In Phase 1 (Plan and Code) you need to think about your expectations in the sense of failure. This is needed if you are running a chaos test; if you are just running a stress test, you don't need to worry about this. So for the chaos test, we need to think about what should happen for each failure scenario and how the code/service should recover from that scenario. Once you have that in mind, you can write down your Gatling script. It is possible to write assertions in Gatling/Scala in order to check whether your assumptions were correct, as shown in the sketch below. You will write Scala code, and you might test it locally just to make sure the code is correct. Then you can move on and push this code to GitHub. We often create a project to make it easier, and often folks have multiple chaos/stress scenarios, so a project is handy. So this project is pushed to GitHub. Phase 2 is Execution; let's explain what happens with the stress test / chaos code.
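A simulation sketch, roughly in the Gatling 2.x Scala DSL (the URL, endpoint, and thresholds are illustrative):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class CalcStressSimulation extends Simulation {

  val httpConf = http.baseURL("http://my-service.internal:8080") // illustrative target

  val scn = scenario("basic-stress")
    .exec(
      http("sum")
        .get("/sum?a=1&b=2")
        .check(status.is(200)) // per-response assertion
    )

  setUp(scn.inject(rampUsers(100) over (60 seconds)))
    .protocols(httpConf)
    .assertions(global.responseTime.max.lessThan(500)) // fail the run if too slow
}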

Phase 2 - Execution


The Stress Test / Chaos Platform runs in Jenkins (so we did not have to build a new UI). However, we just use Jenkins as the UI; the Gatling machine runs as a separate AMI, so if Jenkins fails we don't lose the tests. Doing it this way, we make sure the stress test and chaos platform is reliable, because Jenkins is not reliable.

The Jenkins job is pretty simple and receives some parameters, like the GitHub project URL, the scenario the developer wants to run, the number of users (threads) the test should run with, the duration of the test, and whether we should apply chaos or not (if yes, against which IP). This information is sent from Jenkins to the Scala code; it is done this way so we can re-run the same test multiple times, increasing the load or the duration, which is quite handy. This is the main Jenkins job, but there is a second job which we call Profile. This job runs your stress test multiple times with different numbers of users (threads), which we call rounds: first with 1 user, then 10 users, then 100, then 1k, 5k, 10k, and so on; the rounds come as a parameter, so you can tell it what sequence you want. Why do we do this? Because then we have an automated way to know when the service breaks, so we can know how many users the service can handle and what the latency is as we increase users.

Continuing with the flow: after the user triggers the Jenkins job, the stress test code project is cloned from GitHub and we spin up a new Gatling AMI; then the target microservice is stressed during the period you specify. Every single machine we have has observability, so we send metrics via collectd to SignalFX. When we call a target microservice, that microservice might call another microservice, and that microservice yet another one, and so on. Most of our services use Cassandra as the source of truth; others use Dynomite as the source of truth, and others might just use Dynomite as a cache. During the test, if you are running a chaos scenario, a new AMI with Simian Army will pop up and kill some of your downstream dependencies. When the test is finished, all the Gatling metrics are sent to SignalFX, then zipped and sent to S3. We use Jasper Reports to generate PDF reports with all the metrics, so the developer can have a nice PDF to do the analysis of that test. We also use D3 + Puppeteer in order to render the Gatling reports into images to show in Jasper, so part of the platform has some nodejs + javascript code; most of the code is Scala. If developers want to do comparisons, they can go to SignalFX and get all the historical results; we have custom dashboards there.

Phase 3 - Analysis

Since all the data is in SignalFX, it is pretty easy to correlate Cassandra data with Gatling data and pretty much all the other information we have, like Hystrix metrics, OS-level metrics, and so on. Right now we are doing the comparisons manually, but in the future this will be done by a service which will feed into an automated canary score, so if you degrade the performance or increase the latency, your deploy will fail. Often the common problems/bottlenecks are file descriptors not being tuned in Linux, connection pools, thread configurations, lack of cache, and too much logging.

Lessons Learned

Building a stress test / chaos platform is not that hard; getting developers to use it with discipline is the hard part. There are interesting challenges here, don't get me wrong, but some evangelism and support is the core part, I would say. Stress tests / chaos need to be on the Definition of Done or the production-ready checklist of the microservice teams, otherwise people might not use them as much as they should.

Another big lesson for me was the fact that not every developer likes to do stress testing / chaos engineering. I have worked with engineers who love it and others who hate it, so cultural FIT is something I care a lot about nowadays: I make sure whoever works on my team wants to do the kind of work I'm doing and cares about DevOps.

Cheers,
Diego Pacheco

Wednesday, December 6, 2017

Reactive Programming with Akka Streams

Akka Streams is one of the many interesting and very useful Akka modules. Akka is a powerful actor/reactive framework for the JVM. Akka is an extremely high-performance library: you can do up to 50 million msgs/sec on a single machine, with a small memory footprint of roughly 2.5 million actors per GB of heap. Akka is also resilient by design and follows the principles of the Reactive Manifesto. Akka follows the Erlang actor philosophy/ideas. Many big and successful companies use Akka in production, like Blizzard, Intel, Walmart, PayPal, Amazon, Zalando, Netflix, IGN, VMware, UBS, and many others.



Actors and Protocols

Actors are a nice abstraction and programming model; however, they are not ideal for every single kind of problem. Some problems fit very well with actors, others don't. When you are using Akka, you are pretty much defining a protocol, like a state machine: you define the series of messages that your actors will share in order to do your work. However, working with Akka has some pitfalls: you need to model your problem with actors, and depending on what you do, you could lose messages, and some transformations might be harder to do. Actors could be used to deal with streaming work, but it would be less productive and less natural; that's where Akka Streams comes into play.

Akka Streams

Akka Streams is an Akka module which uses Akka under the hood. Akka Streams has the materialization concept, where a stream (sequences of Source -> Sink and Graph work) is converted into a set of actors in an actor system; Akka Streams uses Akka to run your computation work.


You can think about Akka Streams as a big pipeline of operations, where you have a Source (a SourceShape[+T]), which pretty much is a publisher and provides data. This Source publishes data to a subscriber, a Flow (a FlowShape[-I, +O]), and then you can apply transformations or even continue to pipe more things. Akka Streams is nice because you get functional programming similar to Apache Spark, for instance, but truly reactive and low latency. Akka Streams can also handle slow consumers / fast producers by applying back pressure.

Show me the Code

Sometimes the best way to understand something is to go down to the code. So let's see some code samples. I will show some nice things you can do with Akka Streams, and then you can draw your own conclusions.
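A minimal sketch of a linear pipeline (Akka 2.5-era API):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

object StreamsApp extends App {
  implicit val system = ActorSystem("streams")
  implicit val materializer = ActorMaterializer() // turns the blueprint into actors

  val source = Source(1 to 100)           // publisher
  val double = Flow[Int].map(_ * 2)       // transformation
  val sink   = Sink.foreach[Int](println) // subscriber

  // materialize and run the pipeline
  source.via(double).runWith(sink)
}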


Here you can get the full project code with sbt; just check it out on my GitHub.

Cheers,
Diego Pacheco




Monday, November 20, 2017

Running Multi-Nodes Akka Cluster on Docker

Akka is a great actor framework for the JVM. Akka is high performance, with high throughput and low latency, and resilient by design. It's possible to use Akka with Java; however, I have always used it with Scala, and I recommend you do the same.

Creating actor systems is pretty easy; however, creating a multi-node Akka cluster locally can be boring and error-prone. So I want to show how easy it is to create an Akka cluster using Scala and Docker (Engine & Docker Compose). We will create a very simple actor system that just logs events on the cluster; however, you can use this as a base for more complex code.


Build.sbt

In this project, we will use Scala 2.12.3 and Akka 2.5.6. Here we define these versions and also the Docker configuration, so a Dockerfile will be generated from this project.
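A sketch of what this build.sbt can look like, using sbt-native-packager's Docker support (the project name and exact settings are illustrative):

name := "akka-cluster-docker"
version := "1.0"
scalaVersion := "2.12.3"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"   % "2.5.6",
  "com.typesafe.akka" %% "akka-cluster" % "2.5.6"
)

// sbt-native-packager: generate a Dockerfile / image for this app
enablePlugins(JavaAppPackaging)
dockerExposedPorts := Seq(2552)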



We will be using SBT 0.13.8 -- you can define the SBT version in the file project/build.properties.

project/build.properties

sbt.version=0.13.8

We also need to add the Docker plugin to SBT. We do it by editing the file project/plugins.sbt.

plugins.sbt

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.3")

OK. Now we can go to the code. Let's create a Main.scala at src/main/scala.

Main.scala
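A sketch of what Main.scala can look like (the actor-system name default matches the seed-node address used later):

import akka.actor.{Actor, ActorLogging, ActorSystem, Props}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ClusterListener extends Actor with ActorLogging {
  val cluster = Cluster(context.system)

  // register for cluster events when the actor boots up
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case MemberUp(member)          => log.info("Member up: {}", member.address)
    case UnreachableMember(member) => log.info("Member unreachable: {}", member)
    case MemberRemoved(member, _)  => log.info("Member removed: {}", member.address)
    case _: MemberEvent            => // ignore other events
  }
}

object Main extends App {
  val system = ActorSystem("default")
  system.actorOf(Props[ClusterListener], name = "clusterListener")
}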



Here we have an ActorSystem, and we register a ClusterListener, which is an actor that will be notified about all cluster events; when this actor boots up, it registers itself on the cluster. This registration is defined in the preStart function.

reference.conf
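A sketch of the relevant settings (the hostname and networking details are elided; ports and names follow the description below):

akka {
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    netty.tcp {
      port = 2552
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://default@akkaseed:2552"]
  }
}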



Here we define the configuration for the Akka cluster. We set the actor provider to akka.cluster.ClusterActorRefProvider. One important thing here is the seed nodes: these nodes need to be up in order for other nodes to join the cluster. You don't need to have all your nodes as seeds; usually folks use 2-3 nodes as seeds. Our seed node address is akka.tcp://default@akkaseed:2552, so we need a node called akkaseed running on port 2552.

docker-compose.yml
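A sketch of the compose file (the image name is illustrative; it should match whatever sbt docker:publishLocal produced):

version: "2"
services:
  akkaseed:
    image: akka-cluster-docker:1.0
  node:
    image: akka-cluster-docker:1.0
    depends_on:
      - akkaseed

The seed service is named akkaseed so the seed-node address in reference.conf resolves via Compose's network DNS.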



Finally, we have the Docker Compose configuration. There are only 2 containers here, seed and node, so this will create a cluster with 2 nodes. However, we will be able to scale the node service to N nodes.

Building & Running

Now is the fun part :-). Open the Linux terminal; we need to build the Docker image and then we can run Docker Compose. So let's do it:

$ sbt clean compile docker:publishLocal
$ docker-compose up

That's it: we have the cluster up and running.

Akka 2 Node Cluster Running - docker-compose up

Scaling Up the Cluster

We can scale the cluster to N nodes. Right now we only have 2 nodes; let's scale the cluster to 10 nodes. We do it with Docker Compose: $ docker-compose scale node=10. That's it, you should have something like this.

Scaling up the cluster - docker-compose scale node=10

Akka Cluster Logs - After Scale Up to 10 nodes

This is great for development and debugging. For production, I recommend using Kubernetes. You can get all the source code on my GitHub here.

Cheers,
Diego Pacheco

Sunday, November 19, 2017

The power of Scala Type Classes

The Scala language was very inspired by Haskell. Type classes are another sample of this, one of the many inspirations taken from Haskell.

There are always several ways and styles of thinking to address problems. Type classes are another way of thinking about and dealing with problems. For some problems you might just use a pattern matcher. But what if you have to add a type? Right, I could stick with good old OO, since Scala is hybrid, right? But what if you need to add one more operation? Type classes can help you out.

What Are Type Classes?

In order to have a type class you need to have:
1. A signature
2. Implementations for all supported types (you can add more types later)
3. A function that requires a type class

What are the Use Cases for Type Classes?

Type classes are great because they make behavior more explicit and extendable. The behavior becomes more explicit because it is checked at compile time. And you can extend and add more behavior without re-compiling or changing your previous code. That's why type classes are powerful.

What are the practical Use cases?

1. Business rules / domain objects: where you need to express different and customizable calculations, making changes without breaking the client.
2. Pipelining: when you need to apply several different rules to a request.
3. Serialization/deserialization: when you need to express different parsing logic.

Sample Code

Now let's see some sample code. In the first sample we define some simple math operations, and we use implicits to apply automatic conversions for us.
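A sketch of this first sample (the exact method names are illustrative):

object MathOps {
  // type class signature
  trait NumberLike[T] {
    def plus(other: T): T
    def minus(other: T): T
  }

  // implicit conversions: wrap an Int / Double in its NumberLike
  implicit class NumberLikeInt(x: Int) extends NumberLike[Int] {
    def plus(other: Int): Int  = x + other
    def minus(other: Int): Int = x - other
  }

  implicit class NumberLikeDouble(x: Double) extends NumberLike[Double] {
    def plus(other: Double): Double  = x + other
    def minus(other: Double): Double = x - other
  }
}

object MathMain extends App {
  import MathOps._
  println(1.plus(2))      // 3
  println(2.5.minus(0.5)) // 2.0
}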



So here we have a trait called NumberLike which expects a type parameter T, which could be any numeric type. Then we have a companion object with support for 2 types, Int and Double. This support is added via NumberLikeInt and NumberLikeDouble, which work as implicit conversions: one converts an Int into a NumberLikeInt and the other converts a Double into a NumberLikeDouble. Once we do the import in the main method, we can use regular Int and Double values and call our math methods on them.

The second sample is about behavior attached to different objects and the ability to add things later. So let's take a look at the following code.
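A sketch of this second sample:

case class Person(name: String)
case class Cat(name: String)

// type class: evidence that T can talk
trait CanTalk[T] {
  def talk(x: T): Unit
}

object AddOns {
  // default behavior implementations
  implicit object PersonTalker extends CanTalk[Person] {
    def talk(x: Person): Unit = println(s"${x.name} says: Hello!")
  }
  implicit object CatTalker extends CanTalk[Cat] {
    def talk(x: Cat): Unit = println(s"${x.name} says: Meow!")
  }

  // implicit class wrapper: gives any A with a CanTalk[A] a talk method
  implicit class TalkOps[A](x: A)(implicit canTalk: CanTalk[A]) {
    def talk(): Unit = canTalk.talk(x)
  }
}

// a new type added after the fact, without touching the code above
case class Dog(name: String)
object Dog {
  implicit object DogTalker extends CanTalk[Dog] {
    def talk(x: Dog): Unit = println(s"${x.name} says: Woof!")
  }
}

object TalkMain extends App {
  import AddOns._
  Person("Diego").talk()
  Cat("Felix").talk()
  Dog("Rex").talk()
}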



Here we have 2 case classes: Person and Cat. We also have a trait CanTalk which expects a type T, where T can talk. In the AddOns object, we have some default behavior implementations, like PersonTalker and CatTalker, which allow Person and Cat to talk.

We also have an implicit class wrapper which takes a type A and a parameter x. This class provides the talk method, where the CanTalk[A] instance is implicit and delegates to its talk method, passing the x argument.

Right after doing the right import, we can call the talk method on the case classes Person and Cat. We can also add new types, so I defined a Dog after the fact.

Scala type classes are really powerful and cool: they allow you to do really modular design and extend the design in a very clean and concise way. You can download the full source code and SBT project on my GitHub here.

Cheers,
Diego Pacheco

Saturday, October 22, 2016

My first book: Building Applications with Scala

Dear reader, you might have missed my posts over the last 7 months; I was not posting with the same frequency as before. The reason is that I was writing a book about Scala.

I'm here to announce my very first book, about building Scala applications. The book goes from the basic principles of functional and reactive programming to the most advanced Scala language ecosystem solutions, such as the Play Framework, the Akka framework, RxScala, and much more. I hope you enjoy the book, and I hope it can help you in your career building scalable, modern applications.

You can buy the book here: https://www.amazon.com/Building-applications-Scala-Diego-Pacheco/dp/178646148X

Cheers,
Diego Pacheco

Sunday, September 6, 2015

Microservices Contracts: REST vs Native

This is not a new discussion; it is a very old one. I remember back in the SOA days it was always something like REST vs SOAP. SOAP is dead and REST won, but now the question is a bit different.

To be clear: I think REST is the only thing you should use when we talk about EXTERNAL consumers, like an external API that would be accessed by a mobile app or website.

There are other scenarios, like INTERNAL consumers. For this case you can consider using a NATIVE driver instead of REST. So what is a NATIVE driver? It is just code in the language you are using, so it could be Java, Scala, Clojure, Ruby, Go, whatever; it does not matter. The idea is similar to a JDBC driver: you have Java code to access the database. So that's the real question: when I have internal consumers for my microservices, should they talk to each other through REST or through a native driver?

It really depends, because both approaches have PROS and CONS; like everything in life, there are trade-offs. I consider REST, in general, the saner approach, but lately there is some tech evolution that is changing this game.

REST: The Good Parts

The biggest advantage of REST is interoperability: you can code one microservice in Java and another one in C, for instance, and they will be able to talk to each other by default. The second great advantage is the uniform interface: it is always the same way, which you don't get with native clients or SOAP. Another great quality is abstraction, because you don't have any dependency, just HTTP, and you are free to consume it using any library you want.

Native Drivers: The Good Parts

It's way simpler to make a Java or Scala call rather than an HTTP call. Second, you can have great advantages in the sense of performance and being reactive: you can use RxJava or Reactive Streams or Akka and gain lots of benefits beyond better performance and non-blocking IO. Today you can even skip the remote call altogether and do an IPC call instead.

Both Have Issues

REST, for instance, is more work for you, because you will need a mapper to map exceptions to error codes, you will need classes to represent the objects you receive, and you will also need to make the calls to the service. Besides that, you will need to do this for each service; that's the price of flexibility and interoperability. You could still have worse performance because it is another layer of remote calls and potentially blocking IO.

Native drivers have problems as well; they are not a free lunch at all. The first and biggest problem is coupling, because you are calling CODE. Second, you could potentially end up in dependency HELL: let's say you write the native driver in Scala; all the libs you use in your driver will be loaded into the consumer's classpath, and you might have conflicts with Google Guava or Log4j or XML libraries. You will be tied to those libraries and versions, and that's bad because it may make it hard for you to update libraries.

At the end of the day...

REST is best for independence, but you can leverage performance from native drivers. You really need to think case by case and see what makes more sense for your scenarios, and even consider mixed approaches, which is totally fine as well.

Cheers,
Diego Pacheco

Sunday, August 30, 2015

Microservices and Twitter Finagle

Lots of people talk about microservices with REST, and that makes lots of sense, because if we think about SOA and service orientation principles, we should have interoperability, and REST is great for that.

But such a task can be done by an edge service, proxy, or API gateway, so you really don't need to use REST all the time internally.

There are also 2 other important matters. One is performance: for instance, you can use IPC, and it is way faster than HTTP calls. The second is doing it in a JVM language like Scala or Java: you get benefits doing that, because you end up relying more on your compiler, which means fewer tests are needed and makes it easier to consume services.


Finagle (https://twitter.github.io/finagle/) is used by Twitter, so you shouldn't have doubts about scalability (https://blog.twitter.com/2011/finagle-a-protocol-agnostic-rpc-system). Finagle is built on top of Netty (https://blog.twitter.com/2014/netty-at-twitter-with-finagle). Using it will give you lots of benefits. Here are some:
  * Connection pools with throttling
  * Failure detection
  * Load balancers
  * Back pressure
  * Statistics, logs, and reports

Finagle is built in Scala and has a very simple and awesome syntax. Finagle uses Futures to perform work. Futures are useful for these kinds of scenarios:
  * Long computations
  * Network calls
  * Reading from disk

All that comes with the best of functional programming, thanks to Scala :-) So you can do all sorts of higher-order functions and function composition with map, fold, filter, flatMap, etc... Future composition is simple and very powerful; in a microservice context you definitely need something to deal with composing calls. You can learn more by checking this out (http://monkey.org/~marius/talks/twittersystems/#1).

Finagle has 3 basic ideas: Servers, Clients, and Proxies. A Server can be an HTTP REST service with business logic exposed. Clients are libraries that we use to call the Server, and Proxies are used for all sorts of things: they can act as edge services, handling all sorts of cross-cutting concerns like logging, security, API gateway, redirects, canary releases, and much more.

Server
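A sketch of a server, using the current finagle-http API (endpoint and message are illustrative):

import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response, Status}
import com.twitter.util.{Await, Future}

object Server extends App {
  // a Service is just a function Request => Future[Response]
  val service = new Service[Request, Response] {
    def apply(req: Request): Future[Response] = {
      val rep = Response(req.version, Status.Ok)
      rep.contentString = "Hello from Finagle"
      Future.value(rep)
    }
  }
  Await.ready(Http.serve(":8080", service))
}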


The server just has an apply function, and as you can see it is very simple; and yes, there is a Future.

Client
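A sketch of the matching client (host and path are illustrative):

import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Method, Request, Response}
import com.twitter.util.{Await, Future}

object Client extends App {
  // a client is also a Service[Request, Response]
  val client: Service[Request, Response] = Http.newService("localhost:8080")

  val request = Request(Method.Get, "/")
  val response: Future[Response] = client(request)

  response.onSuccess { rep => println("GET success: " + rep.contentString) }
  Await.result(response)
}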


This client code is as easy as the server; yes, a Future again, so easy :-)

Proxy
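A sketch of a proxy (ports are illustrative): since a Finagle client is itself a Service, serving it forwards every request.

import com.twitter.finagle.Http
import com.twitter.util.Await

object Proxy extends App {
  // every request on :8081 is forwarded to the server on :8080
  val client = Http.newService("localhost:8080")
  Await.ready(Http.serve(":8081", client))
}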


That's one of the simplest proxies I have ever seen.

You can check out the complete code on my GitHub: https://github.com/diegopacheco/Diego-Pacheco-Sandbox/tree/master/scripts/scala/twitter-finagle-playground-fun

Cheers,
Diego Pacheco

Sunday, June 21, 2015

Setting up a full JVM Env with Vagrant: Java 8, Scala 2.11, Groovy 2.4, Clojure 1.6, Maven 3.3, Gradle 2.4, Sbt 0.13 and Lein

In previous posts I showed how to use Docker, Docker Compose, and NodeJS with Vagrant; now let's do a full JVM env :-)

This setup is pretty easy and straightforward; however, it could take some time depending on your internet bandwidth, because we are going to download several components.

Remember, you need to have Vagrant installed already. This time we are going to need more than just the Vagrantfile, because Java, Scala, and Clojure, for instance, require us to set up environment variables, so we are going to need to change some files for Clojure and SBT in order to make everything work smoothly.

What languages will be installed? Java 8, Scala 2.11, Groovy 2.4, and Clojure 1.6.
What build systems will be installed? Maven 3.3, Gradle 2.4, SBT 0.13, and Lein.

You will need to use this Vagrantfile; here I do all the provisioning necessary to have these components installed and properly configured, ready for you to use and have fun with.


As I said, you will need a custom bashrc file and also a script to run Clojure; all these files you can get on my GitHub here: https://github.com/diegopacheco/Diego-Pacheco-Sandbox/tree/master/DevOps/vagrant-jvm (look for the conf directory).

Now we are ready to run $ vagrant up && vagrant ssh
Vagrant will download and provision all the installations we need; as I said, this could take some time.

Cheers,
Diego Pacheco

Thursday, May 7, 2015

Core FP Concepts slidecast

Functional programming is growing a lot nowadays; not only pure functional programming languages, but several new languages are hybrid or at least have FP influences, like Java, JavaScript, C#, and Python.

In this slidecast I talk about some of the core concepts of functional programming. You won't find all the concepts here, but this could be a starting point for you.

I hope you have fun :-)





Video: https://vimeo.com/127154380



Slides: http://pt.slideshare.net/diego.pacheco/conceitos-funcionais

