Parinaz Barakhshan, PhD Projects

Innovative Researcher

Showcasing Expertise

Throughout my journey in Computer Engineering, I have dedicated myself to impactful projects that have shaped my expertise. I invite you to explore my portfolio and see the results of that dedication.

My Portfolio

Welcome to my portfolio. Here you will find a selection of my work. Explore my projects to learn more about what I do.

Xpert Network Project

Developed best practice guidelines and a tailored tool catalog for computational professionals supporting researchers.

Visit Website

Atom Portal

Designed and developed to facilitate access to atomic data for the research community, and to evaluate best practices identified through the Xpert Network Project.

Visit Website

Interactive Cetus (iCetus) Optimizer

Designed and developed an interactive parallelization tool, built on the Cetus auto-parallelizer, that optimizes C applications by inserting OpenMP directives into the code.

Visit Website
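To give a sense of the kind of transformation such a tool applies, here is a minimal, hypothetical C example of a loop annotated with an OpenMP directive; the loop and variable names are illustrative only and are not output of iCetus itself.

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];

    for (int i = 0; i < N; i++)      /* initialize input */
        b[i] = (double)i;

    /* Iterations are independent, so a parallelizer can annotate this
       loop with an OpenMP work-sharing directive. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i] + 1.0;

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```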

Efficiency Enhancements in Auto-Parallelizers

Researched disparities between auto-parallelized and manually parallelized codes to enhance the efficiency of auto-parallelizers. 

Read More

CaRV (Capture, Replay & Validate) Tool

Developed an innovative tool and methodology for optimizing code sections in long-running applications, ensuring correctness and performance.

CaRV Executable Access

Integration Project

Integrated the CaRV tool into iCetus to expedite and validate code-segment optimization, and added OpenAI's GPT-4 to assist in suggesting optimizations.

Visit Website

My Publications

You can scroll through my publications and access them by clicking on the ‘Show Publication’ button.

Exchanging Best Practices and Tools for Supporting Computational and Data-Intensive Research, The Xpert Network

We present best practices and tools for professionals who support computational and data-intensive (CDI) research projects. The practices resulted from an initiative that brings together national projects and university teams that include individuals or groups of such professionals. We focus particularly on practices that differ from those in a general software engineering context. The paper also describes the initiative, the Xpert Network, where participants exchange successes, challenges, and general information about their activities, leading to increased productivity, efficiency, and coordination in the ever-growing community of scientists that use computational and data-intensive research methods.

Portal for High-Precision Atomic Data and Computation

In many applications, ranging from studies of fundamental physics to the development of future technologies, accurate atomic theory is indispensable to the design and interpretation of experiments. Direct experimental measurement of the relevant parameters is often infeasible, if not impossible.
This paper reports the release of Version 1 of an online atomic portal for high-precision atomic data and computation that provides such information to a wide community of users.
Version 1 of the portal provides transition matrix elements, transition rates, radiative lifetimes, branching ratios, hyperfine constants, quadrupole moments, and scalar and dynamic polarizabilities for atoms and ions. Version 1 includes data for the elements and ions Li, Be+, Na, Mg+, K, Ca+, Rb, Sr+, Cs, Ba+, Fr, and Ra+. The atomic properties are calculated using a high-precision, linearized coupled-cluster method.
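As background on one of the listed quantities, the scalar dynamic polarizability of a state is conventionally given by the standard second-order expression below (in atomic units); this is the textbook form rather than the portal's documented formula, and is shown here only for context.

```latex
% Scalar dynamic polarizability of state |v> (standard second-order form, atomic units).
% D is the electric dipole operator; the sum runs over intermediate states k.
\alpha^{(0)}_v(\omega) \;=\;
  \frac{2}{3(2J_v+1)} \sum_{k}
  \frac{\langle k \| D \| v \rangle^{2}\,(E_k - E_v)}{(E_k - E_v)^{2} - \omega^{2}}
```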

Automatic and Interactive Program Parallelization Using the Cetus Source to Source Compiler Infrastructure v2.0

This paper presents an overview and evaluation of the existing and newly added analysis and transformation techniques in the Cetus source-to-source compiler infrastructure. Cetus is used for research on compiler optimizations for multi-cores with an emphasis on automatic parallelization. The compiler has gone through several iterations of benchmark studies and implementations of those techniques that could improve the parallel performance of these programs. This work seeks to measure the impact of the existing Cetus techniques on the newer versions of some of these benchmarks. In addition, we describe and evaluate the recent advances made in Cetus, which are the capability of analyzing subscripted subscripts and a feature for interactive parallelization. Cetus started as a class project in the 1990s and grew with support from Purdue University and from the National Science Foundation (NSF), as well as through countless volunteer projects by enthusiastic students. While many Version-1 releases were distributed via the Purdue download site, Version 2 is being readied for release from the University of Delaware.
Keywords: automatic parallelization; subscripted subscript analysis; interactive parallelization
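As a brief illustration of the "subscripted subscript" pattern named above, the hypothetical C fragment below indexes one array through another; whether the loop is parallel depends on a property of the index array (here, that perm holds distinct values), which is what such an analysis tries to establish. The example is illustrative and not taken from Cetus.

```c
#include <stdio.h>

#define N 8

int main(void) {
    /* perm holds distinct indices (a permutation), so every iteration
       writes a different element of a and the loop is parallel. */
    int perm[N] = {3, 1, 7, 0, 5, 2, 6, 4};
    double a[N] = {0.0}, b[N];

    for (int i = 0; i < N; i++)
        b[i] = (double)(i + 1);

    /* Subscripted subscript: a is indexed through perm. A compiler can
       only mark this loop parallel if it proves perm[i] != perm[j] for i != j. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[perm[i]] = 2.0 * b[i];

    for (int i = 0; i < N; i++)
        printf("a[%d] = %.1f\n", i, a[i]);
    return 0;
}
```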

iCetus: A Semi-automatic Parallel Programming Assistant

The iCetus tool is a new interactive parallelizer, providing users with a range of capabilities for the source-to-source transformation of C programs using OpenMP directives on shared-memory machines. While the tool can parallelize code fully automatically for non-experts, power users can steer the parallelization process in a menu-driven way. iCetus, which is still in its early stages of development, is implemented as a web application for easy access, eliminating the need for user installation and updates. The tool supports the user through all phases of the program transformation process, including program analysis, parallelization, and optimization. The first phase includes both static and dynamic analyses, pointing out loops that represent performance bottlenecks and should be improved. The parallelization phase offers diverse options to cater to different levels of user skill. By displaying compiler analysis results in an interactive manner, iCetus supports the user in pinpointing parallelization impediments and resolving them. During the optimization phase, the programmer can apply successive improvements by editing the program, evaluating the performance, and comparing it to that obtained by previous program versions. iCetus also serves as a learning tool to help users understand important program patterns and their parallelization. In this way, it also helps train the user to write code that is likely to yield better performance.
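A small, hypothetical example of the kind of parallelization impediment such analysis can surface: the temporary scalar in the C loop below causes a data race if the loop is naively parallelized, and scoping it private is the sort of fix a user would apply and then re-evaluate. The code is a sketch, not iCetus output.

```c
#include <math.h>
#include <stdio.h>

#define N 100000

int main(void) {
    static double x[N], y[N];
    double t;  /* shared by default: a race unless declared private */

    for (int i = 0; i < N; i++)
        x[i] = (double)i / N;

    /* Declaring t private gives each thread its own copy and removes
       the race that dependence analysis would report. */
    #pragma omp parallel for private(t)
    for (int i = 0; i < N; i++) {
        t = sin(x[i]) + cos(x[i]);
        y[i] = t * t;
    }

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}
```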

Exchanging Best Practices for Supporting Computational and Data-Intensive Research, The Xpert Network

We present best practices for professionals who support computational and data-intensive (CDI) research projects. The practices resulted from the Xpert Network activities, an initiative that brings together major NSF-funded projects for advanced cyberinfrastructure, national projects, and university teams that include individuals or groups of such professionals. Additionally, our recommendations are based on years of experience building multidisciplinary applications and teaching computing to scientists. This paper focuses particularly on practices that differ from those in a general software engineering context. This paper also describes the Xpert Network initiative where participants exchange best practices, tools, successes, challenges, and general information about their activities, leading to increased productivity, efficiency, and coordination in the ever-growing community of scientists that use computational and data-intensive research methods.

Application of Software Engineering in Building the Portal for High-Precision Atomic Data and Computation

The Atom portal, udel.edu/atom, provides the scientific community with easily accessible high-quality atomic data, including energies, transition matrix elements, transition rates, radiative lifetimes, branching ratios, polarizabilities, and hyperfine constants for atoms and ions. The data are calculated using a high-precision state-of-the-art linearized coupled-cluster method. All values include estimated uncertainties. Where available, experimental results are provided with references. This paper describes some of the software engineering approaches applied in the development of the portal.

A comparison between Automatically versus Manually Parallelized NAS Benchmarks

By comparing automatically versus manually parallelized NAS Benchmarks, we identify code sections that differ, and we discuss opportunities for advancing auto-parallelizers. We find ten patterns that challenge current parallelization technology. We also measure the potential impact of advanced techniques that could perform the needed transformations automatically. While some of our findings are unsurprising yet difficult to attain (compilers need to get better at identifying parallelism in outermost loops and in loops containing function calls), other opportunities are within reach and can make a difference. They include combining loops into parallel regions, avoiding load imbalance, and improving reduction parallelization.

Advancing compilers through the study of hand-optimized code is a necessary path to move the forefront of compiler research. Very few recent papers have pursued this goal, however. The present work tries to fill this void.
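One of the within-reach opportunities named above, reduction parallelization, can be sketched with a hypothetical C loop: the scalar accumulation is a cross-iteration dependence unless it is expressed as an OpenMP reduction. The example is illustrative and not taken from the NAS Benchmarks.

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0 / (double)(i + 1);

    /* The accumulation into sum is a dependence across iterations;
       the reduction clause lets each thread keep a partial sum that
       is combined at the end of the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```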

A Portal for High-Precision Atomic Data and Computation: Design and Best Practices

The Atom portal, udel.edu/atom, provides the scientific community with easily accessible high-quality data about properties of atoms and ions, such as energies, transition matrix elements, transition rates, radiative lifetimes, branching ratios, polarizabilities, and hyperfine constants. The data are calculated using a high-precision, state-of-the-art linearized coupled-cluster method; high-precision experimental values are used where available. All values include estimated uncertainties. Where available, experimental results are provided with references. This paper provides an overview of the portal and describes its design as well as the software engineering practices applied.

Learning from Automatically Versus Manually Parallelized NAS Benchmarks

By comparing automatically versus manually parallelized NAS Benchmarks, we identify code sections that differ, and we discuss opportunities for advancing auto-parallelizers. We find ten patterns that challenge current parallelization technology. We also measure the potential impact of advanced techniques that could perform the needed transformations automatically. While some of our findings are unsurprising yet difficult to attain (compilers need to get better at identifying parallelism in outermost loops and in loops containing function calls), other opportunities are within reach and can make a difference. They include combining loops into parallel regions, avoiding load imbalance, and improving reduction parallelization.

Advancing compilers through the study of hand-optimized code is a necessary path to move the forefront of compiler research. Very few recent papers have pursued this goal, however. The present work tries to fill this void.
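Two of the other opportunities mentioned above, merging adjacent parallel loops into one parallel region and mitigating load imbalance, look roughly like the hypothetical C sketch below: a single parallel region avoids repeated thread start-up, and a dynamic schedule spreads uneven iterations across threads. The numbers and loop bodies are made up for illustration.

```c
#include <stdio.h>

#define N 100000

int main(void) {
    static double a[N], b[N];

    /* One parallel region enclosing two work-sharing loops avoids
       paying the fork/join cost twice. */
    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* Iterations do uneven amounts of work (inner loop grows with i),
           so a dynamic schedule helps balance the load. */
        #pragma omp for schedule(dynamic, 64)
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j <= i % 1000; j++)
                s += a[i] * 0.001;
            b[i] = s;
        }
    }

    printf("b[N-1] = %f\n", b[N - 1]);
    return 0;
}
```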

CaRV -- Accelerating Program Optimization through Capture, Replay, Validate

This paper presents a new methodology and tool that speeds up the process of optimizing science and engineering programs. The tool, called CaRV (Capture, Replay, and Validate), enables users to experiment quickly with large applications, comparing individual program sections before and after optimizations in terms of efficiency and accuracy. Using language-level checkpointing techniques, CaRV captures the necessary data for replaying the experimental section as a separate execution unit after the code optimization and validating the optimization against the original program. The tool reduces the amount of time and resources spent on experimentation with long-running programs by up to two orders of magnitude, making program optimization more efficient and cost-effective.
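A minimal, self-contained sketch of the capture/replay/validate idea, assuming nothing about CaRV's real interface: an original and an optimized version of a code section are run on the same captured inputs and their outputs compared within a tolerance. In CaRV itself the capture uses language-level checkpointing during a full run of the application; the in-memory capture below is only a stand-in.

```c
/* Toy stand-in for the capture/replay/validate workflow: not CaRV's
 * actual implementation or API, just an illustration of the idea. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000
#define TOL 1e-9

/* Original version of the code section under study. */
static void section_original(const double *in, double *out) {
    for (int i = 0; i < N; i++)
        out[i] = sqrt(in[i]) * 2.0;
}

/* Candidate optimized version whose results must match. */
static void section_optimized(const double *in, double *out) {
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        out[i] = 2.0 * sqrt(in[i]);
}

int main(void) {
    double in[N], ref[N], opt[N];

    /* "Capture": record the live-in data of the section (here in memory;
       a real tool would checkpoint it to disk during a full run). */
    for (int i = 0; i < N; i++)
        in[i] = (double)i + 0.5;
    section_original(in, ref);          /* reference live-out data */

    /* "Replay": run only the optimized section on the captured inputs,
       without re-executing the rest of the long-running application. */
    section_optimized(in, opt);

    /* "Validate": compare the replayed results against the originals. */
    for (int i = 0; i < N; i++) {
        if (fabs(ref[i] - opt[i]) > TOL) {
            fprintf(stderr, "mismatch at %d: %g vs %g\n", i, ref[i], opt[i]);
            return EXIT_FAILURE;
        }
    }
    puts("optimized section validated against captured reference");
    return EXIT_SUCCESS;
}
```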

Best Practices for Developing Computational and Data-Intensive (CDI) Applications

High-quality computational and data-intensive (CDI) applications are critical for advancing research frontiers in almost all disciplines. Despite their importance, there is a significant gap due to the lack of comprehensive best practices for developing such applications. CDI projects, characterized by specialized computational needs, high data volumes, and the necessity for cross-disciplinary collaboration, often involve intricate scientific software engineering processes. The interdisciplinary nature necessitates collaboration between domain scientists and CDI professionals (Xperts), who may come from diverse backgrounds.
This paper aims to close this gap by describing practices specifically applicable to CDI applications. They include general software engineering practices, to the extent that they exhibit substantial differences from those already described in the literature, as well as practices that Xperts in the field have called pivotal.
The practices were evaluated using three main metrics: (1) participants’ experience with each practice, (2) their perceived impact, and (3) their ease of application during development. The evaluations involved participants with varying levels of experience in adopting these practices. Despite differing experience levels, the evaluation results consistently showed high impact and usability for all practices.
By establishing a best-practices guide for CDI research, the ultimate aim of this paper is to enhance CDI software quality, improve approaches to computational and data-intensive challenges, foster interdisciplinary collaboration, and thus accelerate scientific innovation and discovery.

Connect With Me