Current projects:

Past projects:

    The complexity of modern electronic systems, combined with nano-scale non-idealities, has made "post-silicon" validation significantly more cumbersome. At this stage, a few fabricated chips are verified for correct functionality under different workloads and operating conditions in order to detect and fix the bugs that escaped the design stage. The process has become time-consuming and expensive due to the cost of equipment, the increasing number of silicon re-spins, the use of manual techniques, and the complicated nature of bugs in nano-scale technologies. Consequently, time-to-market and profit are directly at stake.

    The major challenge at the post-silicon stage is the lack of observability: an interface of at most a few thousand pins must be used to reason about billions of nano-scale components. A bug needs to be detected and localized in time and space, ideally at path-level resolution, within a time window of a few clock cycles. It may not always be possible to reproduce a detected bug, because bugs may be inter-dependent. In addition, the root cause of a bug, such as static manufacturing variations or transient ground and power-supply fluctuations, needs to be identified for more effective detection and analysis of the bugs.

    Timing errors are perhaps the most challenging type of bug at the post-silicon stage. They refer to malfunctions that manifest as setup- and hold-time violations in logic. Although small in number, timing errors can consume the majority of the post-silicon validation cycle. Identifying timing errors is crucial across most domains of the electronics industry, including microprocessors, systems-on-chip (SoCs) integrating intellectual property (IP), and ASICs, in which meeting a target frequency is a necessity. The objective of this research is to bring automation to the debug process for timing errors. Our vision is to rely on a few on-chip measurements that provide valuable information about the internal timing characteristics of the chip. Our goal, on one hand, is the identification of on-chip measurement sites (as existing circuit components) and the design of new structures (to be embedded on-chip) which enhance "timing observability". On the other hand, at the post-silicon stage, we aim to develop analysis methods that reason about the chip's internal timing behavior by utilizing these measurements.
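    For reference, the setup- and hold-time conditions mentioned above take the standard textbook form below (with $t_{cq}$ the clock-to-Q delay of the launching flip-flop, $T_{\text{clk}}$ the clock period, and clock skew neglected for simplicity); this is background notation, not a formulation specific to this project:

```latex
\begin{aligned}
t_{cq} + t_{\text{logic,max}} + t_{\text{setup}} &\le T_{\text{clk}}
  && \text{(setup: slowest path must fit in the cycle)} \\
t_{cq} + t_{\text{logic,min}} &\ge t_{\text{hold}}
  && \text{(hold: fastest path must not race the clock)}
\end{aligned}
```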

    Related Publications:

    The high volume and complexity of cells and interconnect structures in modern designs pose serious challenges to routability. Rapid congestion analysis (i.e., quick identification of congestion spots on the layout) can help address the routability problem relatively early, for example during placement and in conjunction with global routing. Increasing the correlation between global routing and detailed routing is another major challenge to routability.

    In modern designs, several new factors contribute to routing congestion, including significantly different wire sizes and spacings among the metal layers, the sizes of inter-layer vias, various forms of routing blockages (e.g., regions reserved for the power grid, clock network, or IP blocks in an SoC), local congestion due to pin density and wiring inside a global cell, and virtual pins located at the higher metal layers. In view of the above, the objective of this research is to develop techniques for congestion analysis at the global routing stage, for rapid analysis, and for increasing the correlation with detailed routing.
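    A common way to make the notion of "congestion spots" concrete is to estimate, for each edge of the global-routing grid, the ratio of routing demand to wiring capacity. The sketch below is purely illustrative (the edge names and numbers are hypothetical, and this project's actual analysis is more involved), but it shows the basic ratio test, including blockages modeled as zero-capacity edges:

```python
# Illustrative sketch of rapid congestion estimation on a global-routing grid.
# An edge's congestion is estimated as routing demand / wiring capacity;
# a ratio above 1.0 marks a congestion hot spot. Names/values are hypothetical.

def congestion_map(demand, capacity):
    """Return the demand/capacity ratio per grid edge. Edges with no
    capacity entry are treated as fully blocked (e.g., reserved for the
    power grid or an IP block), i.e., infinitely congested if demanded."""
    ratios = {}
    for edge, d in demand.items():
        c = capacity.get(edge, 0)
        ratios[edge] = float('inf') if c == 0 else d / c
    return ratios

def hot_spots(ratios, threshold=1.0):
    """Edges whose estimated utilization exceeds the threshold."""
    return sorted(e for e, r in ratios.items() if r > threshold)

# Toy example: three edges between global cells g1..g4.
demand   = {('g1', 'g2'): 12, ('g2', 'g3'): 4, ('g3', 'g4'): 7}
capacity = {('g1', 'g2'): 10, ('g2', 'g3'): 8, ('g3', 'g4'): 7}
ratios = congestion_map(demand, capacity)
print(hot_spots(ratios))  # only ('g1','g2') exceeds its capacity
```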

    Related Publications:

    Design of today's electronic systems would not be possible without the tools that automate the process of integrating billions of nano-scale components--e.g., into the "brain" of an iPhone. As technology advances towards mobile devices that are smaller yet more powerful, these tools need to evolve as fast as the systems that they help design--in fact faster, because the nano-scale components not only grow in number but also shrink in size, bringing new challenges along with them.

    To improve existing design-aid tools, a new window of opportunity has arisen due to the emergence of a more powerful yet affordable and secure computational platform: a cloud of multi-core computers working together as if it were one enormous machine. By leveraging this cloud computing platform, the proposed research investigates alternative design automation strategies that were traditionally deemed too time-consuming.

    One focus of the research is to improve an important step of the design process known as global routing, the step in which designers plan how the billions of nano-scale components will be interconnected on the chip. This planning can significantly impact the severity of many issues in subsequent stages of the design cycle, yet it has to be done quickly. With the aid of large-scale parallelism provided by computational grids, the research aims to demonstrate that the use of a computational technique called integer programming, which was previously viewed as too time-consuming for global routing, can help generate significantly higher quality solutions while meeting practical runtime requirements.
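    As an illustration of the integer-programming view of global routing (a generic textbook-style formulation, not necessarily the exact model used in this project): each net $n$ selects exactly one route from a candidate set $R(n)$, edge capacities $c_e$ are respected, and total wirelength is minimized:

```latex
\begin{aligned}
\min \quad & \sum_{n \in N} \sum_{r \in R(n)} \ell_r \, x_{n,r}
  && \text{(total wirelength; } \ell_r \text{ is the length of route } r\text{)} \\
\text{s.t.} \quad & \sum_{r \in R(n)} x_{n,r} = 1
  && \forall n \in N \quad \text{(each net is routed exactly once)} \\
& \sum_{n \in N} \; \sum_{\substack{r \in R(n) \\ e \in r}} x_{n,r} \le c_e
  && \forall e \in E \quad \text{(grid-edge capacity)} \\
& x_{n,r} \in \{0, 1\}
\end{aligned}
```

    Solving such models exactly is what was traditionally considered too slow for global routing; large-scale parallelism is what makes revisiting them attractive.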

    Related Publications:

    In today's technology, a single IC (e.g., the part of an iPhone that serves as the "brain") integrates billions of nano-scale components. These components are packed into a tiny space, yet provide amazingly diverse functionalities (multimedia, internet, phone) at high speed, while consuming very little power. As the components continue to shrink in size, however, current technology faces steeper challenges in delivering the products of the next generation.

    A major challenge stems from the imperfections in IC fabrication: smaller components are less tolerant of these imperfections, and their performance may turn out to be so poor that the IC has to be entirely redesigned in order to meet the performance specs. This means delayed delivery of the final product, which in turn means a major loss of market opportunities.

    This research aims to develop a mathematical framework for producing IC designs that are robust with respect to fabrication imperfections. A distinguishing feature of this framework is that it requires only limited knowledge of the manufacturing process--e.g., rather than relying on detailed information about manufacturing inaccuracies, it suffices to know only some bounds on the degree of imperfection. This is important because designers often don't have access to detailed data on these inaccuracies: such data may not exist, or third-party manufacturers may not release it. So the goal is a framework that is not only robust with respect to manufacturing errors, but also less dependent on detailed knowledge of them.
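    The "bounds only" idea can be sketched with a toy worst-case check: knowing only that each stage of a path may be up to a fraction eps slower than its nominal delay (no statistical model of the inaccuracies), one can still verify whether the path meets timing under the worst case. The function names and numbers below are hypothetical and merely illustrate the concept:

```python
# Toy illustration of robustness under bounded imperfections: verify a path's
# timing using only a bound eps on per-stage delay variation, with no detailed
# model of the manufacturing inaccuracies. Names and values are hypothetical.

def worst_case_delay(nominal_delays, eps):
    """Upper bound on the path delay when each stage may be up to a
    fraction eps slower than its nominal value."""
    return sum(d * (1.0 + eps) for d in nominal_delays)

def meets_timing(nominal_delays, eps, clock_period):
    """The path is robust if even its worst-case delay fits in the period."""
    return worst_case_delay(nominal_delays, eps) <= clock_period

path = [0.8, 1.2, 0.5]                 # nominal stage delays (ns)
print(meets_timing(path, 0.10, 3.0))   # fits: worst case is 2.75 ns
print(meets_timing(path, 0.25, 3.0))   # fails: worst case is 3.125 ns
```

    A design passing this kind of check needs no redesign for any realization of the imperfections within the stated bounds, which is precisely the robustness property described above.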

    Related Publications: