Introduction
Project Context
This project was carried out as part of the Parallel and Concurrent Programming (PPC) course. The objective was to design and implement a concurrent system that makes explicit use of the concepts studied during the course, such as process management, inter-process communication, and shared state synchronization. The chosen subject is a simple ecosystem simulation inspired by the idea of a “circle of life”. The system models interactions between different entities evolving over time:
- an environment, which represents the global ecosystem,
- prey agents, which consume environmental resources,
- predator agents, which depend on the prey population for survival.
Each entity is implemented as an independent process, allowing the simulation to naturally exhibit concurrent behavior. This design makes it possible to observe how multiple processes interact, share information, and evolve simultaneously, which directly reflects the type of problems addressed in parallel and concurrent programming.
Objectives of the Project
The main objective of this project is to put into practice the core concepts introduced in the PPC course through a concrete and observable system.
More specifically, the project aims to:
- create and manage multiple concurrent processes
- implement explicit inter-process communication (IPC) mechanisms
- share and synchronize a global state between independent processes
- design a clear and understandable process architecture
Design and Technical Choices
Multi-Process Architecture
The project is based on a multi-process architecture, where each major component of the simulation runs in its own independent process. Separating the environment, agents, and display into different processes makes concurrency issues explicit. Each process has its own execution flow and internal state, and interactions between components must be handled through well-defined communication mechanisms. This approach avoids hidden shared execution and makes the behavior of the system easier to reason about.
Role of the Main Process
The main process acts as the central control process of the application. Its responsibility is limited to launching the environment and display processes, handling user commands received from the display, and spawning new prey or predator processes when requested. Keeping the spawning logic in the main process simplifies the overall control flow. Since user interaction is already handled there, centralizing process creation avoids unnecessary coupling between the environment and the user interface. This design choice keeps responsibilities clearly separated and improves readability.
Central Environment Process
The environment process is responsible for maintaining the global state of the ecosystem and updating it at regular intervals, for example by managing resource growth and environmental conditions. It also handles the registration of new agents through a TCP socket and publishes state updates to the display process.
The global state is stored in shared memory and can be accessed by all processes. While the environment process performs some updates, agent processes also modify parts of the shared state during execution, such as resource consumption or population changes. All accesses and modifications to the shared state are synchronized using a semaphore, ensuring consistency and preventing race conditions.
Shared Memory for Global State
Shared memory is used to store the global state of the simulation. It contains a compact representation of the ecosystem, including resource levels and population counts.
This mechanism was chosen because it allows fast access to frequently read data while remaining simple and efficient. Agent processes periodically read the shared memory to adapt their behavior to the current state of the ecosystem. Updates to shared memory are protected using a semaphore, ensuring safe access when multiple processes attempt to modify the state concurrently.
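To make this concrete, the sketch below shows one way such a segment could be created and read, assuming a Python implementation based on the standard multiprocessing.shared_memory module; the segment name, the field layout, and the initial values are illustrative assumptions rather than the exact implementation.

import struct
from multiprocessing import shared_memory

STATE_FMT = "iiii"                        # grass, prey_count, predator_count, drought flag
STATE_SIZE = struct.calcsize(STATE_FMT)
SHM_NAME = "ecosys_state"                 # hypothetical segment name

def create_state():
    # Environment side: create the segment and write the initial state.
    shm = shared_memory.SharedMemory(name=SHM_NAME, create=True, size=STATE_SIZE)
    struct.pack_into(STATE_FMT, shm.buf, 0, 100, 0, 0, 0)   # full grass, no agents, no drought
    return shm

def read_state():
    # Agent side: attach to the existing segment and read the current values.
    shm = shared_memory.SharedMemory(name=SHM_NAME)
    grass, prey, predators, drought = struct.unpack_from(STATE_FMT, shm.buf, 0)
    shm.close()                           # detach without destroying the segment
    return grass, prey, predators, drought

With this kind of named segment, agent processes attach by name, so no handle needs to be passed around explicitly at spawn time.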
Socket-Based Coordination
A TCP socket is used as an explicit coordination mechanism between agent processes and the environment. When a prey or predator process starts, it connects to the environment socket and sends a short message identifying its type.
This exchange allows the environment to register the agent and update population counters accordingly. The socket is only used during the initialization phase of agents and does not carry continuous state updates, which remain handled by shared memory. This clear separation of responsibilities keeps communication simple and explicit.
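The exchange can be kept very small, as in the hedged sketch below: the agent opens a connection, sends a single identifier, and closes. The port number, message format, and helper names are assumptions made for the example, not the actual protocol.

import socket

ENV_ADDR = ("localhost", 6000)            # assumed address of the environment's registration socket

def register_agent(agent_type: str) -> None:
    # Agent side: announce the agent type ("PREY" or "PRED") once at startup.
    with socket.create_connection(ENV_ADDR) as sock:
        sock.sendall(agent_type.encode())

def poll_registration(server):
    # Environment side: accept at most one pending registration without blocking the tick loop.
    # 'server' is assumed to be a listening TCP socket created at startup.
    server.settimeout(0.0)                # non-blocking accept
    try:
        conn, _ = server.accept()
    except BlockingIOError:
        return None                       # no agent is waiting to register
    with conn:
        return conn.recv(16).decode()     # "PREY" or "PRED"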
Message Queues for Interaction and Logging
Message queues are used to manage communication related to user interaction and logging. They allow the environment and agent processes to send information to the display process asynchronously, without blocking their execution.
Queues are also used to transmit user commands from the display to the main process. This approach ensures that user interaction remains responsive and decoupled from the simulation logic, while providing a safe and synchronized communication mechanism.
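A minimal sketch of this non-blocking usage is given below, assuming multiprocessing.Queue objects shared between the processes; the queue names and the drain helper are illustrative.

import queue
from multiprocessing import Queue

def drain(q) -> list:
    # Collect every pending message without ever blocking the caller.
    messages = []
    while True:
        try:
            messages.append(q.get_nowait())
        except queue.Empty:
            return messages

# Illustrative use: the display drains the status and log queues at each refresh,
# and pushes user commands onto the command queue.
# status_q, log_q, cmd_q = Queue(), Queue(), Queue()
# for line in drain(log_q):
#     print(line)
# cmd_q.put("SPAWN_PREY")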
Use of Signals for Asynchronous Events
UNIX signals are used to handle asynchronous global events affecting the simulation, such as changes in environmental conditions. The environment process registers a signal handler that updates its internal state when a signal is received.
This mechanism allows external events to influence the simulation without interrupting its main execution loop, illustrating the use of signals as an additional IPC mechanism in a concurrent system.
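A possible shape for this handler, assuming a Python implementation and the use of SIGUSR1 (the actual signal chosen may differ), is sketched below.

import signal

drought_active = False        # illustrative flag read by the simulation loop

def toggle_drought(signum, frame):
    # Flip the drought flag; the main loop sees the change on its next iteration.
    global drought_active
    drought_active = not drought_active

# Installed once at environment startup; SIGUSR1 is an assumption of this example,
# any user-defined signal could play the same role.
signal.signal(signal.SIGUSR1, toggle_drought)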
Architecture and Communication Protocols
This section describes the concrete organization of the system and the communication mechanisms used between processes during execution. It focuses on the structure of the application and the data exchanges that occur at runtime.
Process Organization
The application runs as a set of independent processes:
- a main process, which starts the application and manages process creation,
- an environment process, which maintains the global simulation state,
- multiple agent processes, representing prey and predators,
- a display process, which handles visualization and user input.
Communication and coordination between these processes rely on explicit inter-process communication mechanisms, which are described in the following subsections.
Environment Process and Shared State
At startup, the environment process initializes the shared memory segment used to store the global state of the ecosystem. This shared state contains numerical values representing the current resource level and the population sizes.
During execution, the environment process runs a periodic loop. At each iteration, it updates part of the global state, such as resource growth and environmental conditions, writes the updated values to shared memory, and sends a formatted status message to the display process through a message queue. The environment process also maintains a TCP socket server that listens for incoming connections from agent processes.
Agent Processes and Socket Communication
Each prey or predator agent is executed as a separate process. When an agent starts, it establishes a TCP connection to the environment process and sends a short message identifying its type.
Upon receiving this message, the environment process updates the corresponding population counter in the shared state. After this initialization step, the socket connection is no longer used by the agent. During execution, agent processes periodically read the shared memory to retrieve the current global state and update their internal variables accordingly. Agent processes may also modify the shared state during execution, for example when consuming resources or when an agent dies.
Synchronization of Shared Memory Access
Access to shared memory is synchronized using a semaphore. Whenever a process modifies the shared state, it acquires the semaphore before reading and writing the values, and releases it afterwards.
This mechanism ensures that concurrent updates to shared memory are performed safely and that the shared state remains consistent throughout execution.
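The pattern is the classic acquire / modify / release sequence. The sketch below illustrates it for a grass-consumption update, reusing the illustrative state layout introduced earlier; the function name and field order are assumptions, not the actual code.

import struct

STATE_FMT = "iiii"            # grass, prey_count, predator_count, drought (illustrative layout)

def consume_grass(shm, sem, amount):
    # Read-modify-write on the shared state, protected by the semaphore.
    with sem:                 # acquire before touching shared memory
        grass, prey, pred, drought = struct.unpack_from(STATE_FMT, shm.buf, 0)
        grass = max(0, grass - amount)
        struct.pack_into(STATE_FMT, shm.buf, 0, grass, prey, pred, drought)
    # the semaphore is released here, so other processes may access the state again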
Message Queues and User Commands
Message queues are used to exchange information between processes without blocking execution.
- The environment process sends status updates to the display process.
- Agent processes send log messages to the display process.
- The display process sends user commands to the main process.
These queues are polled regularly and allow asynchronous communication between components.
Signal Handling
The environment process registers a signal handler to receive UNIX signals during execution. When a signal is received, the handler updates the internal state of the environment. The effect of this update is reflected in subsequent iterations of the simulation loop.
Runtime Interaction Summary
At runtime, the system operates as follows: the environment updates and publishes the global state, agent processes independently observe this state and evolve, the display process presents the current state and captures user input, and the main process reacts to user commands by creating new agent processes or terminating the simulation.
Algorithms
This section describes the core algorithms of the simulation using high-level pseudo-code, focusing on process behavior rather than implementation details.
Environment Process
Create shared memory and initialize (grass, prey_count, predator_count)
Create semaphore for synchronized access to shared memory
Open TCP server socket for agent registration
Register signal handler for drought toggle
While simulation is running:
    Start time measurement for the tick
    If a TCP connection is available:
        Receive agent type message (PREY or PRED)
        Acquire semaphore
        Increment the corresponding population counter in shared memory
        Release semaphore
        Close connection
    If drought is not active:
        Acquire semaphore
        Increase grass value (capped at 100)
        Release semaphore
    Read current state from shared memory
    Send "grass,prey,pred,drought" to the display via status queue
    Sleep until the next tick to keep a fixed tick rate
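The only non-obvious step is the last one: to keep a fixed tick rate, the loop measures how long the body took and sleeps only for the remaining time. A minimal sketch of this pacing logic, with an assumed tick length of one second, is shown below.

import time

TICK_SECONDS = 1.0            # assumed tick length

def run_ticks(step):
    # Call step() once per tick, compensating for the time the step itself took.
    # The loop runs until the process is terminated externally.
    while True:
        start = time.monotonic()
        step()                # handle registrations, grow grass, publish the state
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, TICK_SECONDS - elapsed))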
Prey Agent Process
Connect to the environment TCP socket
Send PREY identifier
Connect to shared memory
Initialize energy to ENERGY_START
While energy is greater than zero:
    Sleep for one tick
    Decrease energy by DAMAGE_PER_TICK
    Read grass, prey count, predator count from shared memory
    If energy is below THRESHOLD_H and grass is available:
        Decrease grass by GRASS_EAT_AMOUNT
        Increase energy by FEED_GAIN
    If energy reaches zero or below:
        Decrease prey counter in shared memory
        Terminate the process
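Under the same assumptions as the earlier sketches (Python, an attachable shared-memory segment, a semaphore passed in at spawn time, and illustrative constant values), the prey loop could look as follows; registration over the TCP socket is omitted for brevity.

import struct
import time
from multiprocessing import shared_memory

STATE_FMT = "iiii"                              # grass, prey_count, predator_count, drought
ENERGY_START, DAMAGE_PER_TICK = 50, 5           # illustrative constants
THRESHOLD_H, GRASS_EAT_AMOUNT, FEED_GAIN = 30, 5, 10
TICK_SECONDS = 1.0

def prey_loop(sem, shm_name="ecosys_state"):
    # Registration over the TCP socket is assumed to have been done beforehand.
    shm = shared_memory.SharedMemory(name=shm_name)
    energy = ENERGY_START
    while energy > 0:
        time.sleep(TICK_SECONDS)
        energy -= DAMAGE_PER_TICK
        with sem:
            grass, prey, pred, drought = struct.unpack_from(STATE_FMT, shm.buf, 0)
            if energy < THRESHOLD_H and grass > 0:      # hungry and food is available
                grass = max(0, grass - GRASS_EAT_AMOUNT)
                energy += FEED_GAIN
            if energy <= 0:
                prey -= 1                               # the prey removes itself from the population
            struct.pack_into(STATE_FMT, shm.buf, 0, grass, prey, pred, drought)
    shm.close()                                         # detach; the environment owns the segment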
Predator Agent Process
Connect to the environment TCP socket
Send PRED identifier
Connect to shared memory
Initialize energy to ENERGY_START
While energy is greater than zero:
    Sleep for one tick
    Decrease energy by DAMAGE_PER_TICK
    Read grass, prey count, predator count from shared memory
    If energy is below THRESHOLD_H and prey are available:
        Decrease prey counter by one
        If prey was successfully consumed:
            Increase energy by FEED_GAIN
    If energy reaches zero or below:
        Decrease predator counter in shared memory
        Terminate the process
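The interesting detail here is the conditional consumption: checking that prey are available and decrementing the counter must happen inside the same critical section, otherwise two predators could consume the same prey. The hedged sketch below isolates that step; the function name and state layout are illustrative, not the actual code.

import struct

STATE_FMT = "iiii"            # grass, prey_count, predator_count, drought

def try_hunt(shm, sem):
    # Returns True if a prey was consumed. The check and the decrement are done
    # inside the same critical section to prevent two predators eating the same prey.
    with sem:
        grass, prey, pred, drought = struct.unpack_from(STATE_FMT, shm.buf, 0)
        if prey <= 0:
            return False      # no prey available, the hunt fails
        struct.pack_into(STATE_FMT, shm.buf, 0, grass, prey - 1, pred, drought)
        return True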
Display Process
Initialize the text-based user interface
Initialize a buffer for log messages
While the application is running:
    Read state updates from the environment process
    Display current grass level, prey count, predator count, and drought state
    Read log messages from agents and display them
    If a user input is detected:
        If Q is pressed, send QUIT command to main process
        If P is pressed, send SPAWN_PRED command
        If Y is pressed, send SPAWN_PREY command
        If D is pressed, send TOGGLE_DROUGHT command
    Sleep briefly to keep the interface responsive
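One possible realization of this loop, assuming a curses-based interface and lower-case variants of the key bindings listed above, is sketched below; log handling is omitted to keep the example short, and the queue names are illustrative.

import curses
import queue

KEYMAP = {ord("q"): "QUIT", ord("p"): "SPAWN_PRED",     # lower-case Q/P/Y/D bindings
          ord("y"): "SPAWN_PREY", ord("d"): "TOGGLE_DROUGHT"}

def display(stdscr, status_q, cmd_q):
    stdscr.nodelay(True)                  # getch() returns immediately instead of blocking
    while True:
        try:                              # show the latest status line, if one is pending
            status = status_q.get_nowait()
            stdscr.addstr(0, 0, status[:curses.COLS - 1].ljust(curses.COLS - 1))
        except queue.Empty:
            pass
        key = stdscr.getch()              # -1 when no key is pressed
        if key in KEYMAP:
            cmd_q.put(KEYMAP[key])
            if KEYMAP[key] == "QUIT":
                return
        stdscr.refresh()
        curses.napms(100)                 # short pause keeps the interface responsive

# Typical launch: curses.wrapper(display, status_q, cmd_q)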
Main Process
Create communication queues for state updates, commands, and logs
Start the environment process
Wait briefly for initialization
Start the display process
While the display process is running:
    If a command is received from the display:
        If command is QUIT, stop the simulation
        If command is SPAWN_PREY, create a prey process
        If command is SPAWN_PRED, create a predator process
        If command is TOGGLE_DROUGHT, send a signal to the environment process
On termination:
    Stop the environment process
    Stop the display process
    Stop all active agent processes
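A simplified sketch of this command loop is given below, assuming Python's multiprocessing module for process creation and os.kill for the drought signal; the agent entry functions, the queue timeout, and the signal number are assumptions made for the example.

import os
import queue
import signal
from multiprocessing import Process

def main_loop(cmd_q, sem, env_pid, prey_target, pred_target):
    # prey_target and pred_target are the (hypothetical) agent entry functions.
    agents = []
    running = True
    while running:
        try:
            cmd = cmd_q.get(timeout=0.5)          # wait briefly for a user command
        except queue.Empty:
            continue
        if cmd == "QUIT":
            running = False
        elif cmd == "SPAWN_PREY":
            p = Process(target=prey_target, args=(sem,))
            p.start()
            agents.append(p)
        elif cmd == "SPAWN_PRED":
            p = Process(target=pred_target, args=(sem,))
            p.start()
            agents.append(p)
        elif cmd == "TOGGLE_DROUGHT":
            os.kill(env_pid, signal.SIGUSR1)      # asynchronous event for the environment
    for p in agents:                              # terminate any agents still alive
        p.terminate()
        p.join()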
Implementation Plan and Testing
The project was developed following a progressive and incremental approach, allowing each component to be implemented and validated independently before full integration. This method helped reduce complexity and made it easier to identify and correct issues related to concurrency and communication.
The first development step focused on the environment process. This included implementing the main simulation loop, managing internal state variables, and setting up the timing mechanism. At this stage, the correctness of the simulation rhythm and state updates was verified without involving other processes.
The second step consisted in implementing shared memory access. The shared state structure was defined, and read/write operations were tested to ensure that values were correctly stored and retrieved across processes. Synchronization using a semaphore was added at this stage to guarantee safe concurrent access.
Once shared memory was functional, agent processes (prey and predators) were implemented. Each agent was tested independently to verify its life cycle, including energy consumption, interaction with the shared state, and termination conditions. Socket-based communication was then introduced to allow agents to announce themselves to the environment at startup.
The next phase involved integrating message queues and the display process. This allowed the visualization of the simulation state and the transmission of user commands. Special attention was given to ensuring that communication through queues remained non-blocking and did not interfere with the simulation loop.
Testing was performed continuously throughout development. Several scenarios were executed, such as spawning multiple agents, observing population evolution over time, triggering environmental events, and terminating the simulation via user commands. These tests helped confirm that processes remained responsive, shared data stayed consistent, and communication mechanisms behaved correctly under concurrent execution.
Execution and Usage
The application is executed from a single entry point, which initializes and launches all required processes. Running the program starts the main process, which in turn launches the environment process and the display process. Once these components are active, the simulation begins automatically.
During execution, the display process presents the current state of the ecosystem in real time. Information such as population sizes, resource levels, and environmental conditions is continuously updated based on the data received from the environment process. The display also shows log messages generated by agent processes, allowing the user to observe significant events occurring during the simulation.
User interaction is handled through the display interface. The user can issue commands to create new prey or predator agents, trigger environmental events, or stop the simulation. These commands are transmitted asynchronously to the main process using message queues, ensuring that user input does not interfere with the execution of the simulation or the behavior of other processes.
The simulation runs continuously until a termination command is issued. When the program is stopped, all active processes are terminated and the simulation is halted. The environment process is responsible for releasing shared memory resources when it exits, ensuring that no persistent shared state remains.
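Assuming the shared segment is managed with Python's multiprocessing.shared_memory module, this cleanup could reduce to the following sketch, where only the creating process removes the segment.

from multiprocessing import shared_memory

def cleanup_environment(shm: shared_memory.SharedMemory):
    # Environment-side cleanup: detach from the segment and remove it from the system,
    # so that no shared state persists after the simulation ends.
    # Agent processes only detach (close) and never unlink.
    shm.close()
    shm.unlink()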
Overall, the execution workflow is designed to be simple and intuitive for the user, while preserving the concurrent nature of the system and the independence of each process.
Critical Review and Perspectives
Developing this project highlighted several challenges inherent to parallel and concurrent programming. One of the main difficulties was reasoning about interactions between independent processes while ensuring that the global state remained consistent at all times. Designing a system where multiple agents evolve concurrently required careful control of shared data and clear communication boundaries.
A significant challenge was handling shared memory synchronization. Allowing multiple processes to update population values safely required explicit synchronization to avoid race conditions. This was addressed by introducing a semaphore to protect shared memory modifications. While this solution is effective, it also introduced additional complexity and required careful testing to ensure that deadlocks and inconsistent states did not occur.
Another difficulty concerned the distribution of responsibilities between processes. Deciding which process should manage global state, which should handle user interaction, and how agent creation should be coordinated required several iterations. The final architecture, where the main process spawns agents and the environment maintains the global state, proved to be clear and manageable, while remaining compliant with the project specifications.
From a broader perspective, the project could be improved in several ways. Agent behavior could be made more complex, for example by introducing reproduction rules, competition between predators, or spatial constraints. The visualization could also be enhanced to provide a more detailed or graphical representation of the ecosystem. Additionally, the architecture could be extended to support distributed execution across multiple machines.
Despite these possible improvements, the current implementation successfully meets the objectives of the project. It demonstrates a correct and explicit use of multiple IPC mechanisms and provides a clear example of how concurrent processes can interact within a controlled and well-structured system.
Annex - Use of Artificial Intelligence Tools
This annex describes the use of Artificial Intelligence tools during the realization of this project, in accordance with the course instructions.
AI tools were used throughout the project as support tools, mainly to assist with understanding, debugging, and documentation tasks.
During the development phase, AI tools were used to help analyze technical issues encountered in the code. In particular, they were useful when investigating problems related to concurrency, inter-process communication, and synchronization. This included helping interpret error messages, reason about unexpected behaviors, and explore possible causes of runtime issues. All corrections and tests were performed directly by us.
AI tools were also used as a discussion aid during the design phase. They helped clarify architectural ideas, reflect on different ways of organizing process interactions, and verify consistency with the concepts studied in the course. These exchanges supported the design process but did not dictate the final architecture.
Finally, AI assistance was used to improve the clarity and structure of the written report. This included reformulating explanations, organizing sections according to the evaluation criteria, and improving the overall readability of the document.