Multicore Challenges
Trends of progress in the multicore hardware and software industry

Hardware progress in computing followed Moore's law for decades and, in effect, solved many issues such as slowing platforms or lack of resources. In the mid-2000s, however, Moore's law stopped delivering gains in per-core processing capacity, so the industry switched to multi-core environments to keep performance advancing.

Software developers are used to integrating multi-threading into the design of their programs, but they face multiple challenges:

  1. Developers do not know in advance how many cores will be available for their software
  2. Developers write software knowing that it will run alongside other programs, but they cannot anticipate how those programs will interoperate
  3. Threads that run in sequence are subject to inter-core communication latency (see the sketch after this list)
  4. Software performance is very difficult to control and almost impossible to guarantee because of inter-core communication
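
To make point 3 concrete, here is a minimal, stand-alone C++ sketch (our own illustration, not TREDZONE code): two threads ping-pong a flag through one shared atomic variable, so whenever the operating system places them on different cores, every round trip pays the inter-core communication latency.

    // Illustrative micro-benchmark, not TREDZONE code: two threads ping-pong a
    // flag through one shared std::atomic, so each round trip crosses the CPU
    // interconnect whenever the threads land on different cores.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        constexpr int kRounds = 100000;
        std::atomic<int> turn{0};            // 0 = main's turn, 1 = pong's turn

        std::thread pong([&] {
            for (int i = 0; i < kRounds; ++i) {
                while (turn.load(std::memory_order_acquire) != 1) {}  // wait for main
                turn.store(0, std::memory_order_release);             // hand back
            }
        });

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kRounds; ++i) {
            turn.store(1, std::memory_order_release);                 // hand over
            while (turn.load(std::memory_order_acquire) != 0) {}      // wait for pong
        }
        auto elapsed = std::chrono::steady_clock::now() - start;
        pong.join();

        std::printf("average round trip: %.1f ns\n",
                    std::chrono::duration<double, std::nano>(elapsed).count() / kRounds);
    }

On typical hardware each cross-core round trip costs on the order of tens to a couple of hundred nanoseconds, orders of magnitude more than an L1 cache access.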

So the hardware progress that was expected to solve the issue actually makes it worse for developers, IT managers, and business managers.

Multi-core dilemma for developers

The multi-core dilemma presents the biggest near-term challenge to progress in controlled processing.

We argue that the multicore dilemma is actually a substantially worse problem than generally understood: how can software that was carefully designed and validated by thorough testing appear significantly buggy once it runs in real production environments?

The only acceptable long-term solution to this problem is genuine parallelization of the programmer's code, and programmers need tools to reach this goal. However, automatically parallelizing imperative programming languages is uncomputable in the general case, because the data dependency graph of a program cannot be determined precisely without actually running the code, and most "mere mortal" programmers are not productive in functional programming languages (which are, in fact, implicitly parallel).
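
A tiny example (our own, for illustration) shows why the dependency graph cannot be recovered statically: whether the iterations of the loop below are independent depends entirely on data that only exists at run time.

    // Illustrative example, not from TREDZONE: whether the iterations of this
    // loop are independent depends on the runtime contents of idx, which a
    // compiler cannot know when it tries to parallelize the code.
    #include <cstddef>
    #include <vector>

    void accumulate(std::vector<double>& out,
                    const std::vector<double>& in,
                    const std::vector<int>& idx) {
        for (std::size_t i = 0; i < in.size(); ++i) {
            // If idx contains duplicates, two iterations write to the same
            // element and the loop is inherently sequential; if all indices
            // are distinct, every iteration could run in parallel.
            out[idx[i]] += in[i];
        }
    }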

On the developer side, we need new programming toolkits that allow programmers to work in a subset of the imperative style while their code is parallelized safely. On the operations side, IT managers need accurate facilities to organise and regulate how the software is deployed on the operating platform and to control inter-core communication efficiently.

General-purpose instruction processors have dominated computing for a long time. However, they tend to lose performance when dealing with non-standard operations and non-standard data that the instruction set format does not support. The need for customization and performance improvement has produced new models embedding hardware accelerators alongside software: FPGAs, DSPs, ASIPs, and ASICs. These come with significant trade-offs, including cost, flexibility, the ability to update the processing model, and the lack of adequate compiler support.

Down the road, the infrastructure becomes heterogeneous, and the jump from one component to another creates latency and serialization issues that swallow a large part of the savings the concept promised. In practice, these solutions are most efficient for specific processing embedded in devices (cell phones, medical appliances, digital cameras, game devices, printers).

TREDZONE's innovation came from confronting the programming paradigm with an analysis of foreseeable hardware evolution: the way CPUs are operated creates complex dependencies between separate threads that have to work together.

Rethinking multicore programming

The unexpectedly high latency or reduced throughput comes primarily from the interconnect between cores in the CPU, creating an entropic phenomenon that ends up as a damaging exponential model driven by the combination of:

  1. The number of communicating threads contributing to a single piece of software
  2. The number of programs installed on one server
  3. The number of cores in the CPU

As any of these numbers increases, inter-core communication grows, reducing control over the processing and creating unexpected delays and starvation.
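
As a back-of-the-envelope illustration (our own simplification, not a TREDZONE model): take T communicating threads per program, P programs on the server and C cores. The number of thread pairs that may contend on the interconnect already grows quadratically, and the number of possible thread-to-core placements grows exponentially:

    pairs(T, P)         = (T·P)(T·P − 1) / 2  ≈  (T·P)² / 2
    placements(T, P, C) = C^(T·P)

Small increases in any of the three factors therefore produce disproportionate growth in potential inter-core traffic and in the number of states the platform can be in.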

The solution is not to invent an automatically parallelizing language, because most "mere mortal" programmers are not able to be productive in such a language. We have created a new class of development tools based on the requirements of:

  1. The business applications of our targeted industries, such as the capital markets: workflow event-based programming, reactivity, elasticity, low latency, real time
  2. The software developers: sequential programming, multithreading management, an optimized framework, modularity, programming language freedom

Actor model for multicore

The actor model is a concurrent model of computation that treats "actors" as the universal primitives of concurrent computation. Hence, an Actor is the main structural element of any actor model. By definition, an Actor waits for an event message sent by another Actor, which is then processed by an Event handler. The Event handler can execute a local function, create more Actors, and send events to other Actors.

Event communication between two Actors is done through a dedicated uni-directional communication channel called a Pipe. The Actor programming model is therefore completely asynchronous and event-driven. The multithreading model is no longer required, which makes everything much simpler: as a developer, you just code mono-threaded, event-driven Actors and their Events.
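
The following self-contained C++ sketch shows the shape of the model. It is illustrative only: it is not the TREDZONE framework API, and the names Event, Pipe, Actor and onEvent are our own. A Pipe is modeled as a one-way queue of Events, and an Actor is ordinary single-threaded code that reacts to each Event it receives.

    // Minimal sketch of the Actor idea in plain C++ (illustrative only).
    #include <cstdio>
    #include <queue>
    #include <string>

    struct Event { std::string name; int payload; };

    // A Pipe is a uni-directional channel: senders push, exactly one Actor pops.
    using Pipe = std::queue<Event>;

    class Actor {
    public:
        virtual ~Actor() = default;

        // The Event handler: the only place this Actor's code ever runs,
        // always on a single thread of control.
        virtual void onEvent(const Event& e) = 0;

        // Drain the inbox, one Event at a time.
        void processAll(Pipe& inbox) {
            while (!inbox.empty()) {
                onEvent(inbox.front());
                inbox.pop();
            }
        }
    };

    class PrinterActor : public Actor {
    public:
        void onEvent(const Event& e) override {
            std::printf("handled %s with payload %d\n", e.name.c_str(), e.payload);
        }
    };

    int main() {
        Pipe pipe;                    // the channel into the PrinterActor
        PrinterActor printer;
        pipe.push({"tick", 1});       // in a real program another Actor sends these
        pipe.push({"tick", 2});
        printer.processAll(pipe);     // mono-threaded, event-driven handling
    }

A real runtime would run many such Actors concurrently and route Events across cores; the developer-facing part stays exactly this kind of single-threaded handler.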

A Workflow is a program consisting of multiple Actors linked together by several Pipes and handling one or more Events. A Workflow solves one particular business problem. Once the Workflow skeleton is designed, the programming is broken down into coding single-threaded Event handlers, which is the programming habit of all programmers.
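
Extending the sketch above, a Workflow is simply a wiring of such Actors through Pipes. The stand-alone example below (again our own illustration, not the TREDZONE API) links a source, a transforming Actor and a sink into a three-stage pipeline, with a trivial dispatch loop standing in for the framework's scheduler.

    // Sketch of a tiny Workflow: three Actors linked by two Pipes.
    #include <cstdio>
    #include <queue>

    struct Event { int value; };
    using Pipe = std::queue<Event>;        // uni-directional Actor-to-Actor channel

    struct Actor {
        virtual ~Actor() = default;
        virtual void onEvent(const Event& e) = 0;
    };

    // Stage 2: doubles every value and forwards it downstream.
    struct Doubler : Actor {
        Pipe& out;
        explicit Doubler(Pipe& o) : out(o) {}
        void onEvent(const Event& e) override { out.push({e.value * 2}); }
    };

    // Stage 3: terminal Actor of the Workflow.
    struct Sink : Actor {
        void onEvent(const Event& e) override { std::printf("result: %d\n", e.value); }
    };

    int main() {
        Pipe toDoubler, toSink;            // the Workflow's two Pipes
        Doubler doubler(toSink);
        Sink sink;

        for (int i = 1; i <= 3; ++i)       // Stage 1: a source injecting Events
            toDoubler.push({i});

        // Dispatch pending Events until the Workflow drains.
        while (!toDoubler.empty() || !toSink.empty()) {
            if (!toDoubler.empty()) { doubler.onEvent(toDoubler.front()); toDoubler.pop(); }
            if (!toSink.empty())    { sink.onEvent(toSink.front());       toSink.pop(); }
        }
    }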

Business use cases are described as combinations of Workflows using the Actor model, which is scalable and parallel by nature. In addition, developers program in single-threaded mode, which removes the difficult burden of designing optimized parallel programs from their shoulders. Hence, architects and developers work in their comfort zone, and the TREDZONE framework handles all the rest, bridging the gap between parallel programming and hardware multicore complexity.