
Playing with Erlang Concurrency

In this blog post, I want to show what I have been learning about Erlang concurrency. Let’s start first with a definition of Erlang taken from Erlang’s own page:

“Erlang is a general-purpose programming language and runtime environment. Erlang has built-in support for concurrency, distribution and fault tolerance.”

In other words, Erlang is a functional programming language that uses pattern matching to bind variables to values. While this might sound rather unremarkable, Erlang’s powerful toolset has earned it adoption by several notable companies, such as Amazon, WhatsApp, Facebook, and Ericsson.

In this post, we will be focusing on concurrency to demonstrate how Erlang controls multiprocessing. Let’s dive in and take a look at some examples.

To follow along, you will need to install Erlang and clone the GitHub repository.

There are 5 files we’ll be focusing on:

  1. single_process, multi_process, multi_process_worker and a book file, which will be used in the first example, “Count Words.”
  2. hard_task, which is used in the second example of the same name.

Let’s start with the “Count Words” example.

 

Count Words Example

To show you the efficiency with which Erlang manages process distribution, we will compare the time required to perform a counting task using just one process vs. multiple processes.

Count_words is a program that reads the words of a book and returns the number of times each word appears. The size of the book we are going to read is 32.4 MB.

The single_process module will read the book, capitalize all the letters of each word, and record the number of times each word appears. All this will be executed in a single process.
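
The repository’s actual code may differ, but a minimal sketch of a single-process word count could look like this (the module and function names single_process and count_words/0 come from the example; the file name "book.txt" and the helpers read_lines/2 and add_to_map/2 are assumptions for illustration):

-module(single_process).
-export([count_words/0]).

% Open the book, count every word in a single process, and return the resulting map.
count_words() ->
  {ok, Device} = file:open("book.txt", [read, binary]), % "book.txt" is a guessed file name
  Map = read_lines(Device, #{}),
  ok = file:close(Device),
  Map.

% Read the file line by line, accumulating word counts in MapAcc.
read_lines(Device, MapAcc) ->
  case file:read_line(Device) of
    {ok, Line} ->
      Words = re:split(Line, " "),
      read_lines(Device, lists:foldl(fun add_to_map/2, MapAcc, Words));
    eof ->
      MapAcc
  end.

% Capitalize the word and increment its count, starting at 1 if it is new.
add_to_map(Word, MapAcc) ->
  Key = string:uppercase(Word),
  maps:update_with(Key, fun(N) -> N + 1 end, 1, MapAcc).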

Let’s go to the console and run this example:

  1. Execute: erl
  2. Compile the module: c(single_process).
  3. Execute: single_process:count_words().
  4. Note the time elapsed during the execution.
3> c(single_process).
{ok,single_process}
4> single_process:count_words().
elapsed time: 71.117 seconds
#{<<"DELIGHTED.\n">> => 5,<<"LIMIT.\n">> => 20,
  <<"VULVO-VAGINITIS,">> => 5,<<"TRUCK\n">> => 5,
  <<"EH">> => 5,<<"SPADEFULS">> => 10,<<"RINGING\n">> => 10,
  <<"HOMESTEAD">> => 110,<<"FEVERISH,\n">> => 10,
  <<"POET,\n">> => 5,<<"COUNTING">> => 70,<<"FEW--WAS">> => 5,
  <<"SIGHT-SEERS">> => 10,<<"COST;">> => 5,
  <<"CAPABLE\n">> => 15,<<"PEOPLE">> => 2850,
  <<"RESTED.\n">> => 5,<<"105,708,771">> => 5,
  <<"TREATED">> => 405,<<"DETECTED.">> => 15,
  <<"DISHONORABLE?">> => 10,<<"CHOOSING.\n">> => 5,
  <<"PAPER)">> => 5,<<"PETRIFIED">> => 10,<<"431,866">> => 5,
  <<"FIBRIN">> => 25,<<"APPOINT,">> => 5,
  <<"MERCHANT.">> => 10,<<"QUIT\n">> => 15,...}

It took 71.117 seconds using a single process.

I am using a MacBook Pro with an Intel Core i7 2.2 GHz processor. Processing times may vary depending on the specs of your computer.

Now let’s see how quickly this task is completed using multiple processes.

  1. Execute: erl
  2. Compile the sender: c(multi_process).
  3. Compile the worker: c(multi_process_worker).

The multi_process module is the sender in charge of:

  • Generating n number of processes.
  • Opening the book.
  • Reading the book line by line.
  • Receiving the responses from the processes.
  • Merging the responses sent by the processes.
  • Returning the result.
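
Before looking at the dispatch loop, here is a sketch of how the sender’s entry point might be structured. This is an assumption for illustration (the file name and the assumption that the worker’s loop/1 function is exported are guesses); only read_lines/3, shown right after, is taken from the repository.

count_words() ->
  NumWorkers = 9, % the post uses nine workers
  % Spawn the worker processes, each starting with an empty map.
  AllWorkers = [spawn(fun() -> multi_process_worker:loop(#{}) end)
                || _ <- lists:seq(1, NumWorkers)],

  % Open the book and hand its lines to read_lines/3, which dispatches them round-robin.
  {ok, Device} = file:open("book.txt", [read, binary]), % "book.txt" is a guessed file name
  Result = read_lines(Device, AllWorkers, 0),
  ok = file:close(Device),
  Result.

The dispatch loop itself, read_lines/3, looks like this: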
% This function reads the file line by line and sends each line to the next worker process.
% It takes 3 parameters: the 'device' in order to read the file, the 'list' with all the
% worker PIDs, and a 'counter', which is needed to dispatch the line to the next worker.
read_lines(Device, AllWorkers, Counter) ->
     % We are going to read the file line by line.
  case file:read_line(Device) of
    {ok, Line} ->
      % Getting the next worker process.
      WorkerPid = next_worker_pid(AllWorkers, Counter),

      % Sending the line to the next worker.
      WorkerPid ! {self(), Line},

      % Reading a new line and updating the counter.
      read_lines(Device, AllWorkers, Counter + 1);
    eof ->
      % Eof is returned when the text is finished. When this occurs, we let all the workers know
      % that the file is finished, so we are going to wait for the responses.
      lists:foreach(fun(Pid) ->
        Pid ! {self(), eof}
      end, AllWorkers),

      % Wait for the worker responses.
      waiting_response(#{}, length(AllWorkers))
  end.

This function reads the file line by line and sends each line to the next worker process. It needs 3 parameters: the ‘device’ in order to read the file, the ‘list’ with all the worker PIDs, and a ‘counter,’ which is needed to dispatch the line to the next worker.
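
read_lines/3 also relies on next_worker_pid/2, which is not shown in the snippet. A minimal round-robin version could look like this (a sketch of one possible implementation, not necessarily the repository’s):

% Pick the next worker round-robin: the counter modulo the number of workers,
% plus 1 because Erlang lists are 1-indexed.
next_worker_pid(AllWorkers, Counter) ->
  Index = (Counter rem length(AllWorkers)) + 1,
  lists:nth(Index, AllWorkers).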

The multi_process_worker module is in charge of:

  • Receiving a line.
  • Capitalizing all the letters of every word on each line.
  • Adding the words to a map.
  • Counting the number of times each word appears in the book.
% This function is waiting for lines to process.
loop(MapAcc) ->
  receive
    {CallerPID, eof} ->
      % If we receive an ‘eof’ message, that means we have to return the resulting map to the caller process.
      CallerPID ! {self(), MapAcc},
      ok;
    {_CallerPID, Line} ->
      % Split the line into words.
      ListOfWords = re:split(Line," "),
      % Apply the ‘add_to_map/2’ function to each word and add it to the accumulated map.
      % This will return a new map with the new words added.
      NewMapAcc = lists:foldl(fun add_to_map/2, MapAcc, ListOfWords),

      % Wait for more lines with the updated accumulated map.
      loop(NewMapAcc)
  end.

The loop function waits to receive two types of messages: a line, from which it extracts words and counts how many times each one appears, and the ‘end of file’ (eof) message, which tells it to send back the map it has built.
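
The fold above uses add_to_map/2, which is not shown in the snippet. A minimal version (an assumption, for illustration) capitalizes the word, as listed in the worker’s responsibilities, and increments its count:

% Capitalize the word and increment its count in the accumulated map,
% starting at 1 if the word has not been seen before.
add_to_map(Word, MapAcc) ->
  Key = string:uppercase(Word),
  maps:update_with(Key, fun(N) -> N + 1 end, 1, MapAcc).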

Now let’s see how quickly this task can be accomplished by running multiple processes at the same time:

  1. Execute: multi_process:count_words().
  2. Note the time elapsed during the execution.
5> c(multi_process).
{ok,multi_process}
6> c(multi_process_worker).
{ok,multi_process_worker}
7> multi_process:count_words().
elapsed time: 9.707 seconds
#{<<"DELIGHTED.\n">> => 5,<<"LIMIT.\n">> => 20,
  <<"VULVO-VAGINITIS,">> => 5,<<"TRUCK\n">> => 5,
  <<"EH">> => 5,<<"SPADEFULS">> => 10,<<"RINGING\n">> => 10,
  <<"HOMESTEAD">> => 110,<<"FEVERISH,\n">> => 10,
  <<"POET,\n">> => 5,<<"COUNTING">> => 70,<<"FEW--WAS">> => 5,
  <<"SIGHT-SEERS">> => 10,<<"COST;">> => 5,
  <<"CAPABLE\n">> => 15,<<"PEOPLE">> => 2850,
  <<"RESTED.\n">> => 5,<<"105,708,771">> => 5,
  <<"TREATED">> => 405,<<"DETECTED.">> => 15,
  <<"DISHONORABLE?">> => 10,<<"CHOOSING.\n">> => 5,
  <<"PAPER)">> => 5,<<"PETRIFIED">> => 10,<<"431,866">> => 5,
  <<"FIBRIN">> => 25,<<"APPOINT,">> => 5,
  <<"MERCHANT.">> => 10,<<"QUIT\n">> => 15,...}
8>

Using multiple processes, it only took 9.707 seconds to execute this task, versus 71.117 seconds with a single process. What happens in multi_process is that the sender creates the nine workers and then sends one line to each of them in turn; when it reaches the last worker in the list, it starts over with the first one, cycling through the list.

None of the workers ‘die’ until they receive the ‘end of file’ message from the sender. When that happens, each worker sends its final map of words and counts back to the sender and then finishes executing.

Once the sender has all the responses from the workers, it merges them into a single map, adding up the number of times each word appears.
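
This merging step corresponds to the waiting_response/2 call at the end of read_lines/3, which is not shown above. One way to write it (a sketch, not necessarily the repository’s implementation) is to receive one map per worker and add its counts into an accumulator:

% When every worker has replied, return the accumulated map.
waiting_response(MapAcc, 0) ->
  MapAcc;
% Otherwise, receive one worker's map and merge its counts into the accumulator.
waiting_response(MapAcc, PendingWorkers) ->
  receive
    {_WorkerPid, WorkerMap} ->
      Merged = maps:fold(fun(Word, Count, Acc) ->
                 maps:update_with(Word, fun(N) -> N + Count end, Count, Acc)
               end, MapAcc, WorkerMap),
      waiting_response(Merged, PendingWorkers - 1)
  end.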

Nine processes running simultaneously may not seem like massive concurrency, and you may be thinking that this same task could be carried out in another programming language. That is true, of course, but Erlang’s response time and efficient work distribution would be hard to match. If you need to work with concurrency, you should give Erlang a try.

Internally, Erlang handles concurrency by creating small, lightweight units of execution called ‘processes.’ Unlike in languages such as C, these processes are not built on top of native operating system processes or threads; they are created and managed by the Erlang virtual machine. This makes each process far cheaper in terms of memory and CPU than a native OS thread, so it is entirely possible (and common) to run many thousands or even millions of processes from a relatively simple program.

Let’s see another example, for the love of Erlang 🙂

 

Hard Task Example

Imagine that we have to execute a task 10 times, and each time takes a second. If we execute it in a single process, it will take at least 10 seconds, but if we distribute it into 10 processes, it will take ~ 1 second in the best case. Let’s see how we could approach this problem with Erlang.

The hard_task project creates n processes, puts each one to sleep for 1 second, and has each process send a message back to the caller when it has finished executing.

% This function will execute 'hard_task' the number of times provided by the 'TaskTimes'
% variable. It creates a new process for every task.
execute(TaskTimes) ->
  % Get the current time in milliseconds.
  Beginning = get_timestamp(),

  % Store our own PID in order to send it to the spawned processes.
  SelfPID = self(),

  % Spawn ‘TaskTimes’ processes and execute the ‘hard_task’ function.
  [spawn(fun() ->
           hard_task(SelfPID)
         end) || _X <- lists:seq(1, TaskTimes)],

  % Now we wait for the spawned processes to send back a message saying they are finished.
  ok = waiting_response(TaskTimes),

  % Calculate the elapsed time.
  ElapsedTime = (get_timestamp() - Beginning) / 1000,
  io:format("elapsed time: ~p seconds ~n", [ElapsedTime]),
  ok.
% This function is the "Hard Task"; it will wait for 1 second and then reply with a ‘done’ message
% to the CallerPID.
hard_task(CallerPID) ->
  timer:sleep(1000), % Let's simulate a super time-consuming hard task!
  CallerPID ! done. % Send the message ‘done’ to the caller process.
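
The snippet uses two helpers that are not shown, get_timestamp/0 and waiting_response/1. Minimal versions (assumptions for illustration) could look like this:

% Current time in milliseconds.
get_timestamp() ->
  erlang:system_time(millisecond).

% Wait until every spawned process has sent back a 'done' message.
waiting_response(0) ->
  ok;
waiting_response(Pending) ->
  receive
    done -> waiting_response(Pending - 1)
  end.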

 

Let’s run it:

  1. Compile: c(hard_task).
  2. To start with a small number, we’ll execute: hard_task:execute(10).

It took 1.001 seconds.

8> c(hard_task).
{ok,hard_task}
9> hard_task:execute(10).
elapsed time: 1.001 seconds
ok
  1. Execute: hard_task:execute(100).

Fast, right?

10> hard_task:execute(100).
elapsed time: 1.001 seconds
ok

What about 1,000,000? Let’s try it!

  1. Execute: hard_task:execute(1000000). As the output below shows, we first restart the Erlang shell with erl -P 2000000 to raise the process limit, since the default maximum (262,144 processes) is lower than the 1,000,000 processes we are about to spawn.
Sofias-MacBook-Pro:gorilla_blog sofiaprado$ erl -P 2000000
Erlang/OTP 21 [erts-10.0.3] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace]

Eshell V10.0.3  (abort with ^G)
1> hard_task:execute(1000000).
elapsed time: 8.236 seconds
ok
2>

 

I know, it’s crazy. If it took 8.236 seconds to run 1,000,000 processes, that means the VM handled roughly 121,418 processes per second. Amazing!

If you are still not a believer, I invite you to go and try this exercise in another programming language.

Hopefully now it makes sense why companies like WhatsApp and Facebook use Erlang for the high concurrency required by their systems.

If your program needs to run thousands of processes at the same time, you should consider Erlang. Due to the immutability and the architecture of its virtual machine (BEAM), processes are very cheap in terms of memory.

These were just a couple of examples showing how concurrency works in Erlang, but there are many more interesting features that the language has. I will keep you updated as I continue learning more!
