“Functional programming is often regarded as the best-kept secret of scientific modelers, mathematicians, artificial intelligence researchers, financial institutions, graphic designers, CPU designers, compiler programmers, and telecommunications engineers.”
If you want to create multiple Scala futures and then merge their results in a for comprehension, the correct approach is to (a) first create the futures, (b) merge their results in a for comprehension, and then (c) extract the merged result using onComplete or a similar technique. (Creating the futures before the for comprehension matters: futures start running as soon as they’re created, so this lets them run in parallel; if you create them inside the for comprehension they run one after the other.)
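Here’s a minimal sketch of that approach, using three hypothetical futures that each just return an Int; the object name FutureExample and the computations are made up for illustration:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._
    import scala.util.{Failure, Success}

    object FutureExample extends App {

        // (a) create the futures first, so they all start running right away
        val f1 = Future { 1 }
        val f2 = Future { 2 }
        val f3 = Future { 3 }

        // (b) merge their results in a for comprehension
        val result: Future[Int] = for {
            r1 <- f1
            r2 <- f2
            r3 <- f3
        } yield r1 + r2 + r3

        // (c) extract the result with onComplete (or Await, map, etc.)
        result.onComplete {
            case Success(total) => println(s"total = $total")
            case Failure(t)     => println(s"failed: ${t.getMessage}")
        }

        // demo only: keep the JVM alive long enough for the callback to run
        Await.ready(result, 5.seconds)
        Thread.sleep(100)
    }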
“Processes interact by one method, and one method only, by exchanging messages. Processes share no data with other processes. This is the reason why we can easily distribute Erlang programs over multicores or networks.”
Joe Armstrong, in his book,
Programming Erlang: Software for a Concurrent World
“I never teach my pupils. I only attempt to provide the conditions in which they can learn.”
I kept several audiences in mind as I wrote this book:
Netflix has a good, short article on their “journey to asynchronous programming.”
“In Erlang it’s OK to mutate state within an individual process but not for one process to tinker with the state of another process ... processes interact by one method, and one method only, by exchanging messages. Processes share no data with other processes. This is the reason why we can easily distribute Erlang programs over multicores or networks.”
Joe Armstrong, in the book Programming Erlang
This is another good quote from a blog post titled The Downfall of Imperative Programming: “Did you notice that in the definition of ‘data race’ there’s always talk of mutation?”
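To make that point concrete, here’s a small, hypothetical Scala sketch of a data race: two threads mutating a shared var with no synchronization, so increments can be lost. The DataRaceDemo name and the loop counts are made up for illustration:

    object DataRaceDemo extends App {

        var counter = 0   // shared, mutable state

        // two threads performing unsynchronized read-modify-write updates
        val t1 = new Thread(() => for (_ <- 1 to 100000) counter += 1)
        val t2 = new Thread(() => for (_ <- 1 to 100000) counter += 1)

        t1.start(); t2.start()
        t1.join();  t2.join()

        // 'counter += 1' is not atomic, so updates from one thread can
        // overwrite updates from the other; the result is usually < 200000
        println(s"counter = $counter")
    }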
We should grow things (software applications) by adding more small communicating objects, rather than making larger and larger non-communicating objects.
Concentrating on the communication provides a higher level of abstraction than concentrating on the function APIs used within the system. Black-box equivalence says that two systems are equivalent if they cannot be distinguished by observing their communication: two black boxes are equivalent if they have identical input/output behavior.
When we connect black boxes together, we don't care what programming languages have been used inside the black boxes; we don't care how the code inside the black boxes has been organized; we just have to obey the communication protocols.
Erlang programs are the exception. Erlang programs are intentionally structured as communicating processes — they are the ultimate micro-services.
Large Erlang applications have a flat, “bus-like” structure. They are structured as independent parallel applications hanging off a common communication bus. This leads to architectures that are easy to understand and debug, and collaborations that are easy to program.
~ From this post by Joe Armstrong, author of the book Programming Erlang: Software for a Concurrent World
In the “Maintaining the Erlang View of the World” section of his book Programming Erlang: Software for a Concurrent World, Joe Armstrong writes, “The Erlang view of the world is that everything is a process, and that processes can interact only by exchanging messages. Having such a view of the world imposes conceptual integrity on our designs, making them easier to understand.”
Strictly speaking, Amdahl’s Law isn’t only about speeding up serial programs with parallel processing techniques, but in practice that’s how it’s most often applied. Here’s a description from Wikipedia:
“Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours using a single processor core, and a particular part of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − p) = 20). For this reason parallel computing is relevant only for a low number of processors and very parallelizable programs.”
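As a quick check of that arithmetic, Amdahl’s Law gives the speedup on n processors as 1 / ((1 − p) + p/n). Here’s a small, hypothetical Scala sketch that evaluates it for the example above (p = 0.95); the object name and the processor counts are made up for illustration:

    object AmdahlsLaw extends App {

        // theoretical speedup for a program whose parallelizable fraction is p,
        // run on n processors
        def speedup(p: Double, n: Int): Double = 1.0 / ((1 - p) + p / n)

        val p = 0.95   // the Wikipedia example: 95% of the work can be parallelized

        for (n <- List(2, 16, 256, 4096))
            println(f"n = $n%5d   speedup = ${speedup(p, n)}%6.2f")

        // as n grows, the speedup approaches 1 / (1 - p) = 20
        println(f"upper bound = ${1.0 / (1 - p)}%.1f")
    }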