
How to Optimize Your Chip Design Data and Manage Large Design Files


Key takeaways:

  • A modern chip design project can take up as much as a terabyte of disk space and involve hundreds of thousands of files.
  • At such data volumes and file counts, existing version control tools present several challenges that can negatively impact design productivity.
  • You need purpose-built, domain-aware tools to manage your semiconductor design data properly.

Semiconductor product design is advancing rapidly, driven by gate-all-around field-effect transistors, chiplets, new power delivery architectures, ever-smaller process nodes, and more.

Every advance complicates integrated circuit (IC) designs. Increasingly sophisticated design, synthesis, and verification tools consume and generate massive volumes of data.

In this blog, we explore the challenges of managing large design files and large volumes of design data. We show how Keysight solutions can help you solve your semiconductor design data management problems.

An overview of the semiconductor design process


Fig 1. IC design process

The figure above shows the typical stages of designing an IC like a system-on-chip (SoC). Let’s quickly review the important stages, focusing on their input and output data:

  • Architectural design: The IC architects determine how the chip can achieve its specified business and functional goals while satisfying power, performance, and area constraints. At this stage, suitable intellectual property (IP) cores are selected.
  • Functional design and verification: Schematics are created to implement the architectural goals. Electronic design automation (EDA) tools check design rules automatically. Analog and digital simulations use tools like the simulation program with integrated circuit emphasis (SPICE) as well as proprietary tools. These tools may require custom binary data and generate large result files.
  • Logic synthesis: The schematics are converted to text-format register transfer level (RTL) files written in a hardware description language (HDL). EDA tools then generate gate-level netlists from the RTL code. The RTL code undergoes simulations and verifications to qualify the design. This stage may also involve prototyping on field-programmable gate arrays (FPGAs), which requires HDL code and configuration data in various text and binary formats.
  • Physical design: This stage converts the RTL design to a physical layout using place-and-route tools. It outputs physical layout files, such as binary-format graphical design system II (GDSII) files (a short sketch of the GDSII record structure follows this list). This is followed by design rule checking, layout versus schematic checks, electrical rule checking (ERC), clock tree synthesis, timing analysis, and physical verification using proprietary binary configuration and result files. The verified GDSII files are sent for fabrication at tape-out.
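
To make the data formats concrete: a GDSII file is a stream of binary records, each headed by a 2-byte big-endian length, a record-type byte, and a data-type byte. Here is a minimal Python sketch that walks those records, assuming a well-formed file (the file name is hypothetical); a production parser would need far more validation:

```python
import struct

def walk_gdsii_records(path):
    """Yield (record_type, data_type, payload) for each GDSII record."""
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break  # end of stream
            # 2-byte big-endian total length (header included),
            # then a record-type byte and a data-type byte
            length, rec_type, data_type = struct.unpack(">HBB", header)
            if length < 4:
                break  # zero padding marks the end of some files
            yield rec_type, data_type, f.read(length - 4)

# Example: tally the record types in a (hypothetical) layout file.
# from collections import Counter
# print(Counter(r for r, _, _ in walk_gdsii_records("top_level.gds")))
```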

The average disk usage and number of files of different IC design projects are shown below.


Fig 2. IC design disk usage

How design data is traditionally stored and managed

Most semiconductor design projects are stored in general-purpose version-control systems (VCS) like Git or Subversion (SVN) to help track file changes and merge multiple teams’ work.

In a typical VCS, a central repository serves as the enterprise-wide single source of truth to store the master copies of all the design projects, design files, version history of each file, and metadata for each version.

The central repository (or repo) is then mirrored to multiple locations for storage redundancy. The mirrors are synced with the central repository and with each other, either in real time or periodically.

In addition to redundancy, the mirrors provide another important benefit: fast content delivery. By deploying them in locations that are geographically close to the design centers, they enable low-latency data access, which is critical for employee productivity and automated workflows like continuous testing.

Each mirror requires information technology (IT) infrastructure, such as application servers to run the VCS software and storage servers to store the data. Such IT infrastructure may be:

  • an on-premises private cloud
  • colocated infrastructure in third-party data centers in the same city or a nearby area
  • the managed public cloud of an infrastructure service provider
  • a hybrid cloud that combines private and public cloud infrastructure

Each design team member then downloads the files of their assigned projects to their workstation. Depending on their roles and assigned tasks, they may want to open, visualize, analyze, test, modify, or review the project files and data using appropriate design tools.

Downloading files from a version control repository or mirror is called "cloning" or "checking out." The process replicates all the files, all versions of each file, and the associated metadata (such as file names, permissions, and version numbers) from the local mirror to a workspace on the team member’s workstation.

Additionally, all this data management should be convenient and performant enough to enable contemporary practices like working from home, a client site, a partner site, or another remote location.

Challenges of managing large design files

The traditional system of design data management presents several data storage and transfer challenges, outlined below.

High disk space consumption on local workstations

In a mature design project, each SoC, library, and process design kit (PDK) can have file sizes of several gigabytes (GBs). Together, they consume considerable disk storage on each team member’s workstation.

With many design teams working from multiple sites, these downloads place a high load on the enterprise’s networks. Overall, the storage, bandwidth, network speed, and other IT needs of the enterprise go up considerably.

Inefficient handling of large files

A key inefficiency of the traditional system is that every file is downloaded and stored in the user’s workspace, even if it is several GBs in size. Optimized file management, such as downloading files only when they are required, is not available.

Suboptimal management of binary files

Another challenge is the inefficient way traditional VCS handles binary files. As we discussed, IC design projects involve several binary format files, such as GDSII, OASIS, and NCF, some of which may be several GBs.

VCS like Git and SVN have the following problems with binary files:

  • Downloads: Whether required or not, the files are downloaded in their entirety.
  • Storage: Storage optimizations like delta compression (i.e., storing only the changes or deltas between versions) are not optimized for binary files. Some VCS plugins like Git large file storage (LFS) address this to an extent but also introduce additional problems like potential invalidation of references when the commit history is rewritten, deployment and management of separate hosting, limited differencing of binary files, and limited LFS support in third-party tools.
  • Differences: Since VCS are not aware of the internal formats of binary files, they can’t provide convenient user experiences (UX) for tasks like merging and conflict resolution.
  • Updates and syncs: Similarly, VCS can’t efficiently update these files because they aren’t aware of the files’ internal formats. For example, binary formats often include record length fields and checksums that must be updated consistently whenever the data changes (see the sketch after this list).
  • Uploads: When the files are changed, either manually or automatically by EDA tools, ideally only the changes should be transferred over the network. While SVN does this to a certain extent, Git does not. Typically, minor changes cannot be applied incrementally, and the entire binary file must be resubmitted to the central repo.
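
To see why format awareness matters, consider a toy binary record layout: a 2-byte length field, a payload, and a trailing CRC32 checksum. This layout and the function names are hypothetical, a minimal sketch rather than any real EDA format, but it shows how even a small payload edit forces coordinated updates to the length field and the checksum, which a byte-blind VCS cannot do safely:

```python
import struct
import zlib

def write_record(payload: bytes) -> bytes:
    """Pack a record: 2-byte big-endian length, payload, 4-byte CRC32."""
    return (struct.pack(">H", len(payload))
            + payload
            + struct.pack(">I", zlib.crc32(payload)))

def update_record(record: bytes, new_payload: bytes) -> bytes:
    """Replace the payload, fixing the length field and checksum."""
    (old_len,) = struct.unpack(">H", record[:2])
    assert len(record) == 2 + old_len + 4, "corrupt record"
    # The length field and the checksum must change together with the
    # payload; a byte-level tool that rewrites only the payload bytes
    # leaves the file internally inconsistent.
    return write_record(new_payload)

rec = write_record(b"METAL1 layer geometry")
rec = update_record(rec, b"METAL2 layer geometry, revised")
```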

All these problems result in several adverse consequences, such as lower productivity, high storage consumption, network congestion, poor tool user experiences, employee frustration, and more.

Lack of awareness of file dependencies

A severe shortcoming of traditional data management is that each file is treated as an independent unit, ignoring the fact that changes to one file may require changes to, or examination of, other affected files.

This approach to file organization is unsuitable for semiconductor designs, where changes to information in one file must trigger a variety of changes, verifications, simulations, and other actions in a dependency-aware fashion. For example, a change in an IP library must bubble up through all the ICs, subsystems, and projects that depend on it, even if the changed component is minor.
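
At its core, such propagation is a reverse-dependency walk. The following minimal Python sketch illustrates the idea using a hypothetical, hard-coded reverse-dependency map; a real system would derive the graph from IP metadata or a bill of materials:

```python
from collections import deque

# Hypothetical reverse-dependency map: component -> components that use it.
USED_BY = {
    "ip/serdes_phy": ["subsystem/io_complex"],
    "subsystem/io_complex": ["soc/project_a", "soc/project_b"],
    "soc/project_a": [],
    "soc/project_b": [],
}

def affected_by(changed: str) -> set:
    """Breadth-first walk to find everything that must be re-verified."""
    affected, queue = set(), deque([changed])
    while queue:
        for dependent in USED_BY.get(queue.popleft(), []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

print(affected_by("ip/serdes_phy"))
# e.g. {'subsystem/io_complex', 'soc/project_a', 'soc/project_b'}
```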

Such complex dependency management can’t be implemented easily using VCS features like triggers. For example, Git hooks come with challenges like:

  • the lack of built-in dependency information
  • the complexity of cross-repository triggering that necessitates external orchestration
  • the scalability and performance overheads as interdependencies grow
  • the difficulties in access control due to a lack of built-in security and permissions management

Storage and network demands of continuous integration

Continuous integration (CI) is a software development best practice where team members frequently integrate their work, followed by automated testing to detect errors early.

In IC and SoC design, CI encompasses both software and hardware development stages, including RTL simulations, synthesis, simulations using device models, and hardware/software co-verification. These integrations and tests are conducted frequently — often multiple times a day. Tools like Jenkins orchestrate the continuous integration by running automated tests and simulations to ensure system integrity and functionality.

CI workflows create significant storage and network demands; a rough sizing sketch follows the list below. Key sources of storage and network consumption include:

  • version control operations, including frequent cloning and merging of repositories with large design files
  • extensive generation of test and verification data, including logs, waveforms, RTL simulation outputs, and results from automated device model simulations
  • creation of intermediate and final artifacts from the building stages
  • generation of test and build reports
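
As a back-of-the-envelope illustration with assumed (not measured) numbers, artifact retention alone can reach double-digit terabytes:

```python
# Illustrative, assumed numbers; substitute your own project's figures.
runs_per_day = 8            # CI pipelines triggered daily
artifacts_gb_per_run = 40   # logs, waveforms, netlists, reports per run
retention_days = 30         # how long artifacts are kept

storage_tb = runs_per_day * artifacts_gb_per_run * retention_days / 1024
print(f"~{storage_tb:.1f} TB of artifact storage")  # ~9.4 TB
```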

High storage and network consumption by repos and mirrors

The above inefficiencies not only affect workstations but also multiply the storage and network needs of the top-level repos and mirrors.

Unsecured storage and network

VCS and retrofit tools generally don’t ensure strong data security when storing or transferring data. This is risky, since semiconductor IC designs are valuable business assets and are used in sensitive industries like defense and aerospace.

Hindered collaboration

Seamless collaboration between design teams and between design centers is hindered in the following ways:

  • Since syncing is inefficient, a team may start working on an older version of an IP without downloading the latest changes made by other sites. This may lead to subtle incompatibilities or faulty merging later on.
  • There are no easy ways to discuss and collaborate immediately because the design management tools are distinct from the collaboration tools.
  • It’s difficult to create associations between a discussion in a collaboration tool and the relevant files in a VCS. This lack of contextual information is a loss to current team members and the long-term organizational memory about the design.
  • Good traceability, as required by standards like ISO 26262 for automotive functional safety, includes the ability to trace changes to discussions and decisions. But this isn’t easy when using retrofit VCS for semiconductor design data management.
  • The inefficient handling of large design files and resulting latencies may frustrate team members, lower team morale, and hinder their ability to address problems and bugs quickly.

How Keysight SOS solves storage and retrieval challenges


Fig 3. Keysight SOS

SOS is Keysight’s purpose-built solution for optimized semiconductor design data management. It addresses all the data storage and network challenges described above to achieve more efficient use of infrastructure. These improvements are explained below.

Cache servers for performant data access

SOS deploys special cache servers between the workstations and the central repositories. They facilitate the following improvements:

  • File contents are not automatically downloaded to a workstation until the user or a tool explicitly requests them.
  • Full file contents are downloaded only to the cache servers.
  • All the files on workstations are just lightweight symbolic links, as explained in the next section.


Fig 4. Symbolic links to files in SOS cache servers

All the files on user workstations are just symbolic links to respective files on the cache servers. This prevents unnecessary duplication of data and reduces storage and network consumption by an order of magnitude.

For example, with a 1 GB file and 10 workstations, the traditional system would require at least 11 GB in total: 1 GB in the repository and 1 GB on each of the 10 workstations. With SOS, only 1 GB is consumed on the cache server, while the 10 workstations together consume only a few kilobytes for the symbolic links.

The design tools on the workstations are not affected because symbolic links look like regular files to them with the correct metadata. A file is downloaded from the cache server only when its contents are required.
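
The following minimal Python sketch illustrates the idea, not SOS internals: populate() plants a symbolic link in the workspace, while materialize() fetches a file’s real contents into the cache only on first access. The paths, function names, and fetch stub are all assumptions for illustration:

```python
import os

# Hypothetical paths; in practice the cache sits on a fast shared mount.
CACHE = "/tmp/sos_cache"
WORKSPACE = "/tmp/workspace"

def fetch_from_central_repo(rel_path: str, dest: str) -> None:
    """Stub: a real implementation would pull the blob from the repo."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(b"placeholder contents")

def populate(rel_path: str) -> None:
    """Plant a lightweight symbolic link instead of copying the file."""
    link = os.path.join(WORKSPACE, rel_path)
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(os.path.join(CACHE, rel_path), link)

def materialize(rel_path: str) -> str:
    """Fetch the real contents into the cache only when first needed."""
    target = os.path.join(CACHE, rel_path)
    if not os.path.exists(target):
        fetch_from_central_repo(rel_path, target)
    return target

populate("libs/serdes/phy.gds")     # cheap: a few bytes for the link
materialize("libs/serdes/phy.gds")  # the download cost is paid once
```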

Since most users don’t touch the majority of files, libraries, and PDKs in the project, this approach is very storage- and network-efficient and drastically reduces latencies.

Optimized access to top-level libraries and PDKs

In addition to symbolic links for files, SOS implements another higher layer of optimization called sparse populate that takes advantage of the unique folder structures of semiconductor design projects.

Most users will never touch most of the directories containing IP core libraries and PDKs. So, SOS creates symbolic links only to their top-level folders, not to each file under them.

This is particularly useful for complex IP libraries like processor cores that may hold hundreds of files that will never be read or changed by the user or any EDA tool. Not having to create hundreds of symbolic links for those files further reduces latencies and network transfers.
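
Continuing the hypothetical sketch above, and again only to illustrate the idea rather than SOS internals, sparse population comes down to one symbolic link per top-level directory instead of one per file:

```python
import os

def sparse_populate(cache_root: str, workspace: str, entries: list) -> None:
    """Link whole top-level library/PDK trees: one symlink each."""
    for entry in entries:
        link = os.path.join(workspace, entry)
        os.makedirs(os.path.dirname(link), exist_ok=True)
        os.symlink(os.path.join(cache_root, entry), link)

# One symlink stands in for a library tree holding hundreds of files.
sparse_populate("/tmp/sos_cache/soc_a", "/tmp/workspace/soc_a",
                ["libs/cpu_core", "pdk/example_7nm"])
```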

Auto synchronization between sites


Fig 5. Auto-synchronize cache servers for consistent versions across all design centers

SOS auto-synchronizes the cache servers of different sites so that all the design teams anywhere in the world see the same versions. This cuts down the chances of incompatibilities creeping in because teams worked on different versions before merging their changes.

SOS also periodically scrubs unused files out of the cache servers to reduce the amount of storage consumed.
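
A typical scrubbing policy evicts cached files that haven’t been accessed recently. The sketch below shows one such policy in Python; the idle threshold and the reliance on file access times are illustrative assumptions, not a description of SOS’s actual policy:

```python
import os
import time

def scrub_cache(cache_root: str, max_idle_days: int = 30) -> None:
    """Evict cached files that have not been accessed recently."""
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _, filenames in os.walk(cache_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:  # last-access time
                os.remove(path)  # safe: the master copy is in the repo

# scrub_cache("/tmp/sos_cache", max_idle_days=30)
```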

Secure data storage and transfer

All the stored data and network transfers in the SOS ecosystem are encrypted to ensure the robust cybersecurity necessary for sensitive semiconductor projects.

How Keysight solutions optimize collaboration between design teams

Apart from the above optimizations, SOS works in lockstep with another application, Keysight HUB, for IP management and reuse. Working together, the two applications address many more collaboration challenges, as explained below.

Change notifications

While SOS manages the design data, HUB manages IP, understands IP dependency hierarchies, and facilitates IP reuse. Using its enterprise-wide IP catalog, HUB can determine design dependencies based on the bill of materials or specified clients. When a file is changed in SOS, HUB can notify the owners of all dependent ICs and projects.

Efficient workflows

HUB and SOS trigger automated workflows like:

  • notifying all affected components and project owners to review changes
  • analyzing conflicts in files
  • approving releases once all the automated verifications have passed
  • gathering metrics on verification tests, simulation results, and more
  • generating reports for project management and other stakeholders

Built-in collaboration


Fig 6. Collaboration in Keysight HUB

HUB provides built-in collaboration tools to facilitate discussions while maintaining associations between those discussions and the relevant projects, files, and versions. This facilitates traceability, decision-making, institutional knowledge sharing, and organizational memory for the designs.

Graphical tools


Fig 7. Keysight visual design diff

In addition to graphical EDA tools, Keysight offers many more graphical tools that facilitate convenient design data workflows.

For example, the Keysight visual design diff (VDD) tool has a well-designed UI that enables engineers to visually compare schematic diagrams or physical design layouts. This enables accurate understanding and proper merging of changes made by different teams.

Efficiently manage your complex semiconductor designs with Keysight solutions

This blog explained the challenges your semiconductor design teams are facing, or are likely to face, in their design work. Whether you’re an established semiconductor company or a hardware startup pursuing new IC initiatives, our industry-leading design data and IP management solutions, Keysight SOS and Keysight HUB, can solve these challenges and boost your organization’s design productivity.

Want help or have questions?

Contact us


