PhD Thesis: Pilar Gómez Sánchez: June 22, 2018, 12:00. (Assistant Professor UAB)

Title: Analyzing the Parallel Applications' I/O Behavior Impact on HPC Systems.

TDX Source:


The volume of data generated by scientific applications keeps growing, and so does the pressure on the I/O system of HPC platforms. For this reason, an I/O behavior model is proposed for scientific MPI (Message Passing Interface) parallel applications. The goal is to analyze the applications' impact on the I/O system. Analyzing MPI parallel applications at the POSIX-IO level allows us to observe how the application's data are treated at that level.

In this research work, the following is presented: the definition of the I/O behavior model at the POSIX-IO level (the PIOM-PX model), the methodology applied to extract this model, and the PIOM-PX-Trace-Tool. As PIOM-PX is based on the concept of the I/O phase, it can identify the most significant phases: those that have more influence than others on the I/O system and could provoke a bottleneck or poor performance. Analysis based on I/O phases allows identifying, delimiting, and trying to reduce each phase's impact on the I/O system.

PIOM-PX is part of the proposed model PIOM. PIOM integrates the I/O behavior model at the POSIX-IO level (PIOM-PX) and the I/O behavior model at the MPI-IO level (PIOM-MP, formerly known as PAS2P-IO). The model provides the information necessary to replicate an application's behavior on different systems using synthetic, programmable programs. The PIOM-PX-Trace-Tool allows intercepting the POSIX-IO instructions used during the application's execution. The experiments carried out were executed on several standard HPC systems and on a Cloud platform, which made it possible to test the utility of the proposed model PIOM.
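The I/O-phase idea above can be illustrated with a minimal sketch (the event format, the gap threshold, and the significance metric are assumptions for illustration, not the actual PIOM-PX definitions): consecutive POSIX-IO operations separated by short idle gaps are grouped into one phase, and phases can then be ranked by how much data they move.

```python
# Hypothetical sketch: grouping POSIX-IO trace events into I/O phases.
# A new phase starts whenever the idle gap between consecutive operations
# exceeds a threshold; the threshold and metric are illustrative choices.

def split_into_phases(events, gap_threshold=1.0):
    """events: list of (timestamp, op, nbytes) tuples, sorted by time."""
    phases = []
    current = []
    for ev in events:
        if current and ev[0] - current[-1][0] > gap_threshold:
            phases.append(current)
            current = []
        current.append(ev)
    if current:
        phases.append(current)
    return phases

def phase_weight(phase):
    """A simple significance metric: total bytes moved in the phase."""
    return sum(nbytes for _, _, nbytes in phase)

trace = [(0.0, "write", 4096), (0.2, "write", 4096),
         (5.0, "read", 1024), (5.1, "read", 1024), (5.2, "read", 1024)]
phases = split_into_phases(trace, gap_threshold=1.0)
# Two phases result: an early write burst and a later read burst.
```

Ranking the resulting phases by `phase_weight` is one way to single out the phases most likely to stress the I/O system.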


PhD Thesis: Cecilia Elizabeth Jaramillo Jaramillo: July 21, 2017, 11:00. (Researcher at Computer Science Department. Universidad ISRAEL. Quito, Ecuador)

Title: Modeling and Simulation of Contact Transmission of a Nosocomial Infection in a Hospital Emergency Department.

TDX Source:


A nosocomial infection is an infection caused by microorganisms acquired within healthcare environments, and it is one of the main threats faced by hospitalized patients. Methicillin-Resistant Staphylococcus Aureus (MRSA) is one of the most common and dangerous microorganisms in hospital settings, and it can cause serious skin, wound, and organ infections, and even blood-borne infections (bacteremia).

In a healthcare environment, such as the emergency department, the constant interaction between patients, healthcare workers, and the environment contributes to MRSA transmission. The most common routes of transmission are the hands of healthcare workers and contaminated medical instruments or objects in the environment. To counteract the transmission, health services have implemented certain actions called infection control measures.

This research addresses the issue of contact transmission of nosocomial infection in an emergency department, using the capacity of agent-based simulation to represent social phenomena and the human dimension. Agent-based computational models allow us to evaluate potential solutions to specific situations in a virtually created environment.

As a result of this research, a simulator of MRSA contact transmission, the MRSA-T-Simulator, has been obtained. The main objective of this tool is to allow the construction of virtual scenarios in order to study the phenomenon of MRSA transmission and to evaluate the potential impact of different infection control measures on propagation rates.
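As an illustration of the agent-based approach (this is not the MRSA-T-Simulator itself; the contact rule, transmission probability, and hygiene factor are invented for this sketch), one simulation step in which colonization spreads through random contacts might look like:

```python
# Illustrative sketch of one contact-transmission step in an agent-based
# model. Each susceptible agent contacts one random other agent; if that
# agent is colonized, transmission occurs with a probability scaled by a
# hygiene factor (a stand-in for an infection control measure).
import random

def contact_step(agents, p_transmit=0.3, hygiene_factor=1.0, rng=None):
    """agents: list of dicts with a 'colonized' flag. Returns new cases."""
    rng = rng or random.Random()
    newly = []
    for i, a in enumerate(agents):
        if a["colonized"]:
            continue
        other = rng.choice([x for j, x in enumerate(agents) if j != i])
        if other["colonized"] and rng.random() < p_transmit * hygiene_factor:
            newly.append(a)
    for a in newly:
        a["colonized"] = True
    return len(newly)

# One colonized patient among ten agents in a virtual ward.
ward = [{"colonized": True}] + [{"colonized": False} for _ in range(9)]
new_cases = contact_step(ward, rng=random.Random(1))
```

Running the step repeatedly with `hygiene_factor` below 1.0 lets one compare propagation rates with and without the control measure, which mirrors the kind of scenario comparison the simulator is built for.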


PhD Thesis: Eva Bruballa: July 21, 2017, 9:30. (Assistant Professor at Gimbernat Schools, Spain)

Title: Scheduling non critical patients' admission in a hospital emergency department. 

TDX Source:


The increase in life expectancy, the progressive aging of the population, and a greater number of chronic diseases are factors that contribute significantly to the growing demand for urgent medical care and, consequently, in many cases, to the saturation of Emergency Departments (EDs). Taking also into account the limitations on available resources, this constant risk of ED saturation is one of the most important current problems in health systems around the world, since it often results in an excessive length of stay of patients in the service and, consequently, generates dissatisfaction.

The results presented in this study aim to contribute to improving the quality of care provided in EDs. We propose a method to reduce the total length of stay of patients in the service, through a model for planning the arrival of non-critical patients. The model is based on a detailed characterization of the system in terms of its attention capacity and the number of patients arriving each hour. Simulation allows us to gain knowledge about the behavior of the system by predicting patient waiting times for a specific situation or scenario, determined by the way patients arrive at the service and the available sanitary staff resources.

A first contribution of the research is the definition of an analytical model for calculating the theoretical throughput of a given sanitary staff configuration. The objective of this first part is to evaluate the responsiveness of the sanitary staff to the service demand, depending on the configuration of doctors, nurses, admission and triage personnel, and the model of patient flow throughout the service. The second contribution is the definition of a model for scheduling the admission of non-critical patients into the service, by redistributing them with respect to the input pattern initially foreseen from the hospital's historical data. We have verified the effectiveness of the proposed scheduling model using actual data provided by the Hospital de Sabadell, as reference hospital, and using simulation to assess the results of its application.
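A minimal sketch of the bottleneck view behind such an analytical throughput model (staff counts and service times below are invented, and the thesis' actual model may treat the stages differently): patients flow through admission, triage, and attention stages, so the theoretical throughput of a staff configuration is limited by its slowest stage.

```python
# Hedged sketch of a bottleneck-style throughput calculation for a
# sanitary staff configuration. All numbers are illustrative.

def stage_capacity(n_staff, service_time_min):
    """Patients per hour a stage can process."""
    return n_staff * 60.0 / service_time_min

def theoretical_throughput(config):
    """config: {stage: (n_staff, service_time_min)}.
    Overall throughput is limited by the slowest stage in the flow."""
    return min(stage_capacity(n, t) for n, t in config.values())

config = {"admission": (2, 5), "triage": (2, 8),
          "doctors": (4, 25), "nurses": (3, 15)}
tp = theoretical_throughput(config)  # patients per hour
```

Comparing `tp` against the hourly arrival pattern is the kind of check that reveals the hours in which demand exceeds the staff's responsiveness, motivating the redistribution of non-critical admissions.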

The described research contributions offer ED managers new knowledge about the behavior of the service, which may be relevant in decision making aimed at improving service quality, and which is of great interest given the expected growth in demand for the service in the very near future.


PhD Thesis: Joe Carrion Jumbo: July 20, 2017, 11:00. (Researcher at Computer Science Department. Universidad ISRAEL. Quito, Ecuador)

Title: Improving the Network of Search Engine Services through Application-Based Routing.

TDX Source:


Large-scale computer systems such as Search Engines provide services to thousands of users, and their user demand can change suddenly. This unstable demand has a significant impact on the service components (such as the network and hosts). The system should be able to address unexpected scenarios; otherwise, users would be forced to leave the service. A search engine has a typical architecture consisting of a Front Service, which processes user requests; an Index Service, which stores the information collected from the Internet; and a Cache Service, which manages efficient access to frequently used content.

The scientific advances behind these services are, in general, emergent technology, and the network services of a search engine require specialized planning. This research is carried out by studying the traffic pattern of a Search Engine and designing a routing model for messages between network nodes based on the data flow conditions of the Search Engine service. The expected result is a network service specialized in the traffic of a Search Engine, which allocates network resources efficiently according to the demand it supports in real time. The evaluation of the traffic pattern allowed us to identify conditions of network imbalance and message congestion.

Therefore, the designed model combines different routing models from the literature with a new criterion based on the specific conditions of Search Engine traffic. For the design of this proposal, it was necessary to build a scale model of a Search Engine using simulation techniques, and traffic from a real system was used, which allowed us to accurately evaluate the proposed model and compare it with routing models currently available in the literature and in technology. The results show that the proposed model improves the performance of the Search Engine network in terms of latency and network throughput.
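The kind of application-aware criterion described above can be sketched as follows (the node names and the queue-occupancy rule are assumptions for illustration, not the thesis' actual routing model): among the candidate next hops allowed by the route set, pick the least congested one, so that Front-to-Index traffic adapts to the current load.

```python
# Illustrative sketch: a congestion-aware next-hop choice. Candidates come
# from whatever static route set the combined model allows; the tie-breaking
# criterion is current queue occupancy at each candidate node.

def pick_next_hop(candidates, queue_len):
    """candidates: node ids allowed for this message.
    queue_len: dict node id -> messages currently queued there."""
    return min(candidates, key=lambda n: queue_len.get(n, 0))

hop = pick_next_hop(["index-1", "index-2", "index-3"],
                    {"index-1": 5, "index-2": 1, "index-3": 9})
```

Under stable demand this reduces to ordinary shortest-path routing; under the bursty demand described above, it shifts messages away from congested nodes.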


PhD Thesis: Francisco Borges: September 30, 12:00 hrs. 2016. (Assistant Professor at IFBA Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Campus Santo Amaro. Bahia. Brazil)

Title: Care HPS: A High Performance Simulation Methodology for Complex Agent-Based Models.

TDX Source:


This thesis introduces a methodology for research on HPC for complex agent-based models that demand high performance solutions. This methodology, named Care High Performance Simulation (Care HPS), enables researchers to: 1) develop techniques and solutions for high performance parallel and distributed simulation of agent-based models; and 2) study, design, and implement complex agent-based models that require high performance computing solutions. The methodology was designed to easily and quickly develop new ABMs, as well as to extend and implement new solutions for the main issues of parallel and distributed simulation, such as synchronization, communication, load balancing, and partitioning algorithms, in order to test and analyze them. In addition, some agent-based models and HPC approaches and techniques are developed which can be used by researchers in HPC for ABMs that require high performance solutions.

A set of experiments is included with the aim of showing the completeness and functionality of this methodology and evaluating how useful its results can be. These experiments focus on: 1) presenting the results of the proposed HPC techniques and approaches used in Care HPS; 2) showing that the features of Care HPS achieve the proposed aims; and 3) presenting the scalability results of Care HPS. As a result, we show that Care HPS can be used as a scientific instrument for the advancement of the field of parallel and distributed agent-based simulation.


PhD Thesis: Albert Gutiérrez Millà: July 22, 10:00 hrs. 2016. (Researcher at Barcelona Supercomputing Center. CASE - Fusion Dpt.- Barcelona-Spain)

Title: Crowd Modeling and Simulation on High Performance Architectures.

TDX Source:


Management of security at major events has become crucial in an increasingly populated world. Disasters at crowd events have increased over the last hundred years, and therefore the safety management of attendees has become a key issue. To understand and assess the risks involved in these situations, models and simulators are necessary that allow us to understand the situation and make decisions accordingly.

However, crowd simulation has high computational requirements when we consider thousands of people. Moreover, the same initial situation can produce different results depending on the non-deterministic behavior of the population; for this reason we also need a significant number of statistically reliable simulations. In this thesis we have proposed crowd models and focused on providing a DSS (Decision Support System). The proposed models can reproduce the complexity of agents, psychological factors, the intelligence to find the exit, avoid obstacles, or move through the crowd, and can recreate internal events of the crowd in cases of high pressure or density.

In order to model these aspects we use agent-based models and numerical methods. To focus on the applicability of the model, we have developed a workflow that allows the DSS to run in the Cloud, hiding the complexity of the systems from the experts and leaving only the configuration to them. Finally, to test the operation and to validate the simulator, we used real and synthetic scenarios to evaluate the performance of the models.


PhD Thesis: Liu Zhengchun: July 22, 12:00 hrs. 2016. (Researcher Argonne National Laboratory. MSC Dpt. USA)

Title: Modeling & Simulation for Healthcare Operations Management Using High Performance Computing & Agent-Based Models.

TDX Source:


Hospital-based emergency departments (EDs) are highly integrated service units that primarily handle the needs of patients arriving without prior appointment and with uncertain conditions. In this context, the analysis and management of patient flows play a key role in developing policies and decision tools for overall performance improvement of the system. However, patient flows in EDs are considered very complex because of the different pathways patients may take and the inherent uncertainty and variability of healthcare processes. Due to the complexity and crucial role of an ED in the healthcare system, the ability to accurately represent, simulate, and predict the performance of an ED is invaluable for decision makers solving operations management problems. One way to meet this requirement is through modeling and simulation.

Armed with the ability to execute a compute-intensive model and analyze huge datasets, the overall goal of this study is to develop tools to better understand the complexity of ED units (explain), evaluate policies (predict), and improve their efficiency (optimize). The two main contributions of this thesis are: (1) an agent-based model for quantitatively predicting and analyzing the complex behavior of emergency departments; and (2) a simulation- and optimization-based methodology for calibrating model parameters under data scarcity.

Starting from simulating emergency departments, our efforts proved the feasibility and suitability of using agent-based modeling & simulation techniques to study the healthcare system.


PhD Thesis: Javier Panadero Martínez: September 28, 12:00 hrs. 2015. (Researcher at Internet Interdisciplinary Institute (IN3) - Universitat Oberta de Catalunya. Barcelona-Spain) 

Title: Performance Prediction: Analysis of the Scalability of Parallel Applications.

TDX Source:


Executing message-passing applications using a large number of resources is not a trivial task. Due to the complex interaction between message-passing applications and the HPC system, and depending on the system, many applications may suffer performance inefficiencies when they scale to a large number of processes. This problem is particularly serious when the application is executed many times over a long period of time.

With the purpose of avoiding these problems and making efficient use of the system, as the main contribution of this thesis we propose the P3S (Prediction of Parallel Program Scalability) methodology, which allows us to analyze and predict the strong scalability behavior of message-passing applications on a given system.

The methodology strives to use a bounded analysis time and a reduced set of resources to predict the application's performance. The P3S methodology is based on analyzing the repetitive behavior of parallel message-passing applications. Such applications are composed of a set of phases, which are repeated throughout the whole application, independently of the number of application processes.
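The repetitive-phase idea can be illustrated with a toy detector (the signature format and the tiling rule are assumptions for illustration; P3S's actual phase detection works on real application traces): given a per-step sequence of event signatures, find the shortest pattern that tiles the whole sequence, so only one repetition needs to be analyzed in detail.

```python
# Illustrative sketch: detect a repeated phase in a sequence of event
# signatures by finding the shortest period that tiles the sequence.

def find_repeated_phase(signatures):
    """Return (period, repetitions), or (len(signatures), 1) if nothing
    shorter than the whole sequence repeats."""
    n = len(signatures)
    for period in range(1, n + 1):
        if n % period == 0 and signatures == signatures[:period] * (n // period):
            return period, n // period
    return n, 1

# Four iterations of a compute/communicate phase.
trace = ["compute", "send", "recv"] * 4
period, reps = find_repeated_phase(trace)
```

Because the phase structure is independent of the number of processes, characterizing one repetition on a small run is what lets the methodology extrapolate behavior to larger scales within a bounded analysis time.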


PhD Thesis: Adriana Gaudiani. Date: September 11, 12:00 hrs. 2015. (Associate Researcher at Science Institute. Universidad Nacional de General Sarmiento, Buenos Aires, Argentina)

Title: Simulation and Optimization as a Methodology to Improve the Quality of Prediction in a Hydrographic Simulation Environment.

External source:


This dissertation deals with the role of HPC computation in improving the quality of simulation results, where computation is used to provide the best possible values for the simulation model's parameters.
Flooding is one of the most common natural hazards faced by human society. Modelling and computational simulation provide powerful tools that enable flood event forecasting. Nevertheless, a series of limitations cause a lack of accuracy in forecasting, such as uncertainty in the values of the input parameters of the flood model.

In order to predict flood behaviour, we have developed a methodology focused on enhancing a flood simulator, EZEIZA V (developed by the National Institute of Water (INA), Argentina), to minimize the difference between simulated and observed results by adjusting the input parameters, using a two-phase optimization-via-simulation methodology.

In order to find the “optimum” set of input parameters, we reduced the search space using a “Monte Carlo + K-Means Clustering” method. As a result, we achieved improvements of up to 35%, which represents, for example, a significant difference of 0.5 to 1 meters in water level along the whole Paraná River basin.
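A minimal sketch of this two-phase search-space reduction, with a synthetic error surface standing in for the flood simulator and all numbers invented: Monte Carlo sampling keeps the best-scoring parameter sets, and k-means clustering condenses them into a few promising regions for the optimization phase.

```python
# Hedged sketch of "Monte Carlo + K-Means" search-space reduction.
# The error function is a synthetic stand-in for running the simulator
# and comparing simulated against observed water levels.
import random

def monte_carlo_sample(error_fn, bounds, n, keep, rng):
    """Draw n random parameter sets within bounds, keep the `keep` best."""
    samples = [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(n)]
    samples.sort(key=error_fn)
    return samples[:keep]

def kmeans(points, k, rng, iters=20):
    """Plain k-means; the centers are the reduced search regions."""
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Synthetic error surface with a known optimum at (0.03, 1.2).
err = lambda p: (p[0] - 0.03) ** 2 + (p[1] - 1.2) ** 2
rng = random.Random(0)
best = monte_carlo_sample(err, [(0.0, 0.1), (0.5, 2.0)], n=2000, keep=100, rng=rng)
centers = kmeans(best, k=3, rng=rng)
```

The second phase would then run the simulator only around the cluster centers instead of over the full parameter space, which is what makes the calibration affordable.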


PhD Theses supervised by members of the group:


