What are the best practices for Hardware Description Languages (HDLs)?

Posted 2019-03-07 09:43

What best practices should be observed when implementing HDL code?

What are the commonalities and differences when compared to more common software development fields?

Tags: verilog vhdl hdl
6 answers
贪生不怕死
Answer #2 · 2019-03-07 09:56
  • In HDL, different parts of the code run at the same time: two "lines" of code can execute concurrently. This is an advantage to be used wisely, and it is something a programmer accustomed to line-by-line languages may find hard to grasp at first (a short Verilog sketch follows this list):

    • Long pipelines, tailored exactly to your needs, can be created.
    • Your large modules can all operate simultaneously.
    • Instead of one unit performing a repeated action on different data, you can instantiate several units and do the work in parallel.
  • Special attention should be given to the boot-up process: once your chip comes up functional, you have already come a long way.
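
A minimal Verilog sketch of the concurrency point above (module and signal names are made up for illustration): three adders process three data words in the same clock cycle, work that a line-by-line program would do in three iterations.

    // Three add units working in parallel - all three results update
    // on the same clock edge (purely illustrative names).
    module parallel_adders (
        input  wire       clk,
        input  wire [7:0] a0, b0, a1, b1, a2, b2,
        output reg  [8:0] sum0, sum1, sum2
    );
        always @(posedge clk) begin
            sum0 <= a0 + b0;   // these three assignments are not
            sum1 <= a1 + b1;   // executed one after the other -
            sum2 <= a2 + b2;   // they all happen concurrently
        end
    endmodule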

Debugging hardware is usually much harder than debugging software, so:

  • Simple code is preferred; there are often other ways to speed up your design after it is already working, for example by using a higher-speed chip.

  • Avoid "smart" protocols between components.

  • Working HDL code is more precious than in other kinds of software, because hardware is so hard to debug; so reuse what you can, and consider using "libraries" of modules, some of which are free and others sold.

  • The design should account not only for bugs in the HDL code, but also for failures on the chip you are programming and on the other hardware devices that interface with it, so aim for a design that is easy to check.

Some debugging tips:

  • If a design includes several building blocks, you will probably want to route lines from the interfaces between those blocks to test points outside the chip.

  • Reserve enough spare lines in your design to divert interesting data for inspection with external equipment. You can also use these lines, together with your code, to report the current state of execution: for example, when data is received you write one value to the lines, at a later stage of execution you write another value, and so on (a small sketch follows below).

    If your chip is reconfigurable this becomes even handier, since you can tailor specific tests and reprogram the outputs for each test as you go (this looks great with LEDs :) ).
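
As a rough illustration of the debug-lines idea above (pin names, widths, and signal choices are all invented for the example), a tiny module that drives the current FSM state and a couple of internal flags onto spare pins so a scope or LEDs can show how far execution has progressed:

    // Hypothetical debug tap: expose internal state on unused pins.
    module debug_tap (
        input  wire       clk,
        input  wire [2:0] fsm_state,    // internal state of interest
        input  wire       rx_got_data,  // "data arrived" flag
        input  wire       tx_done,      // "data sent" flag
        output reg  [4:0] debug_pins    // routed to spare package pins
    );
        always @(posedge clk)
            debug_pins <= {tx_done, rx_got_data, fsm_state};
    endmodule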

Edit:

By "smart" protocols I meant that when two of your physical units are connected, they should communicate using the simplest communication protocol available; that is, do not use sophisticated home-made protocols between them.

The reason is this: finding bugs "inside" an FPGA/ASIC is relatively easy because you have simulators. So if you are sure that data arrives the way you want it and leaves exactly as your design sends it, you have reached hardware utopia: being able to work at the software level :) (with the simulator). But if your data doesn't reach you the way you expect it to and you have to figure out why... you'll have to probe the lines, and that is not so easy.

Finding a bug on the lines is hard: you have to attach special equipment that records the state of the lines over time, and you have to verify that the lines behave according to the protocol.

If you need to connect two of your physical units, make the "protocol" as simple as it can be, to the point where it hardly deserves to be called a protocol :) For example, if the units share a clock, add x data lines between them, have one unit write those lines and the other read them, and pass one x-bit "word" between them on each falling clock edge (a bare-bones sketch follows this paragraph). On FPGAs, if the original clock rate is too fast for parallel data, you can slow this down based on your experiments, for example by keeping the data on the lines for at least t clock cycles. I assume parallel transfer is simpler, since you can work at lower clock rates and get the same performance, without having to split your words on one unit and reassemble them on the other (hopefully there is no skew between the clock each unit receives). Even this is probably too complex :)
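
A bare-bones sketch of that kind of "non-protocol", assuming both units see the same clock (all names are illustrative, and the sketch registers on the rising edge where the text mentions the falling one):

    // Sender: put one word on the lines every clock, no handshake.
    module simple_tx (
        input  wire        clk,
        input  wire [15:0] word_in,
        input  wire        word_en,
        output reg  [15:0] data_lines,
        output reg         valid_line
    );
        always @(posedge clk) begin
            data_lines <= word_in;
            valid_line <= word_en;
        end
    endmodule

    // Receiver: sample the same lines on its own clock edge.
    module simple_rx (
        input  wire        clk,
        input  wire [15:0] data_lines,
        input  wire        valid_line,
        output reg  [15:0] word_out,
        output reg         word_good
    );
        always @(posedge clk) begin
            if (valid_line)
                word_out <= data_lines;
            word_good <= valid_line;
        end
    endmodule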

Regarding SPI, I2C, etc.: I haven't implemented any of them, but I have connected the pins of two FPGAs running from the same clock (I don't remember the exact arrangement of resistors in between) at much higher rates, so I can't think of a good reason to use those as the main way to pass data between your own FPGAs, unless the FPGAs are located very far from each other, which is one reason to prefer a serial bus over a parallel one.

JTAG is used by some FPGA vendors to test and program their products, but I'm not sure it is used to transport data at high speed, and it is a protocol... (though one that may have some built-in on-chip support).

If you do have to implement a known protocol, consider using pre-made HDL code for it, which can be found free or purchased.

Explosion°爆炸
Answer #3 · 2019-03-07 10:00

This is the kind of question that calls for JBDAVID's 10 commandments of hardware design.

  1. Use revision/version control, just like in software. SVN and Hg are free.
  2. Require the code to pass syntax checking before check-in; a lint tool is even better.
  3. Use a full-strength hardware verification language for design verification. SystemVerilog is a reasonably safe choice.
  4. Track bugs. Bugzilla and GNATS are free tools; FogBugz requires a little $.
  5. Use assertions to catch incorrect usage (a small assertion sketch follows this list).
  6. The coverage triad makes for a stable design: measure code coverage, functional coverage, and assertion coverage in both simulation and formal tools.
  7. Power is king: use CPF or UPF to capture, enforce, and verify your power intent.
  8. The real design is often mixed-signal; use a mixed-signal language to verify the analog alongside the digital. Verilog-AMS is one such solution, but don't go overboard: real-number modeling can cover most of the functional aspects of mixed-signal behavior.
  9. Use hardware acceleration to validate the software that has to work with the silicon!
  10. Syntax-aware text editors for your HDL/HVL are the minimum requirement for a developer IDE.
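
For point 5, a small SystemVerilog assertion sketch (the FIFO signal names are assumptions, not from any particular design): a checker module that can be instantiated or bound into the design under test.

    // Hypothetical checker: the FIFO must never be written while full.
    module fifo_write_checker (
        input wire clk,
        input wire rst_n,
        input wire wr_en,
        input wire full
    );
        property no_write_when_full;
            @(posedge clk) disable iff (!rst_n) wr_en |-> !full;
        endproperty

        a_no_write_when_full: assert property (no_write_when_full)
            else $error("FIFO written while full at time %0t", $time);

        // Matching cover property so assertion coverage (point 6) can
        // confirm the interesting condition was actually exercised.
        c_write_seen: cover property (@(posedge clk) wr_en && !full);
    endmodule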
狗以群分
Answer #4 · 2019-03-07 10:02

HDLs like Verilog and VHDL really seem to encourage spaghetti code. Most modules consist of several 'always' (Verilog) or 'process' (VHDL) blocks that can appear in any order, so the overall algorithm or function of the module is often totally obscured. Figuring out how the code works (if you didn't write it) is a painful process.

A few years ago I came across this paper, which outlines a more structured method for VHDL design. The basic idea is that each module has only two process blocks: one for the combinational code and one for the synchronous part (the registers). It is great for producing readable and maintainable code.
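
The paper is about VHDL, but the same structure carries over directly to Verilog. A minimal sketch of the idea (names invented): one combinational block computes next values, one clocked block holds the registers.

    // Two-block structure: combinational logic in one always block,
    // registers in the other (illustrative example).
    module two_process_counter (
        input  wire       clk,
        input  wire       rst_n,
        input  wire       enable,
        output reg  [7:0] count
    );
        reg [7:0] count_next;

        // 1) Combinational: compute next values from current state.
        always @(*) begin
            count_next = count;          // default: hold current value
            if (enable)
                count_next = count + 8'd1;
        end

        // 2) Sequential: registers only.
        always @(posedge clk or negedge rst_n) begin
            if (!rst_n)
                count <= 8'd0;
            else
                count <= count_next;
        end
    endmodule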

Animai°情兽
Answer #5 · 2019-03-07 10:06

The best book on this topic is Reuse Methodology Manual. It covers both VHDL and Verilog.

In particular, some issues that have no exact match in software:

  • No latches
  • Be careful with resets
  • Check your internal and external timing
  • Use only synthesizable code
  • Register the outputs of all modules
  • Be careful with blocking vs. non-blocking assignments
  • Be careful with sensitivity lists for combinational logic, or use @(*) in Verilog (a small latch example follows this list)
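
A short illustration of the "no latches" and sensitivity-list items (purely illustrative code): an incomplete assignment in a combinational block infers a latch, while a default assignment keeps the logic purely combinational.

    // The first block infers a latch because 'q_latchy' is not
    // assigned on every path; the second avoids it with a default.
    module latch_example (
        input  wire sel,
        input  wire d,
        output reg  q_latchy,   // unintended latch
        output reg  q_clean     // pure combinational logic
    );
        always @(*) begin
            if (sel)
                q_latchy = d;   // no else -> value must be held -> latch
        end

        always @(*) begin
            q_clean = 1'b0;     // default covers every path
            if (sel)
                q_clean = d;
        end
    endmodule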

Some that are the same include:

  • Use CM (configuration management)
  • Have code reviews
  • Test (simulate) your code
  • Reuse code when appropriate
  • Have an up-to-date schedule
  • Have a spec or use cases or an Agile customer
姐就是有狂的资本
Answer #6 · 2019-03-07 10:12

For FPGAs, Xilinx has this page. Almost all of it applies to other FPGA vendors, or they have equivalent rules, and a great deal is also applicable to ASIC designs.

Intel has "Recommended HDL Coding Styles" and "Design Recommendations" (PDF) under this page.

Root(大扎)
Answer #7 · 2019-03-07 10:16

Sort of an old thread, but I wanted to put in my $0.02. This isn't really specific to Verilog/VHDL... it's more about hardware design in general, and specifically about synthesizable design for custom ASICs.

These are my opinions, based on years of industry (as opposed to academic) design experience. They are in no particular order.

My umbrella statement is: design for validation. In hardware design, validation is paramount, and bugs are far more expensive when found in actual silicon; you can't just recompile. Pre-silicon validation therefore gets much more focus.

  • Know the difference between control paths and data paths. This lets you write much more elegant and maintainable code, and it also saves gates and minimizes X propagation. For instance, data-path flops should never need a reset, while control-path flops always should (a short sketch appears after this list).

  • Prove functionality before validation, either through a formal approach or through waveforms. This has many advantages; I will explain two. First, it saves you wasted time peeling the onion of issues one layer at a time. Unlike much application-level development (especially while learning) and most coursework, the turnaround time for code changes is very large (anywhere from 10 minutes to days, depending on complexity): every time you change the code you need to go through elaboration, lint checking, compiling, waveform bring-up, and finally the actual simulation, which can itself take hours. Second, you are much less likely to be left with hard-to-hit corner cases. Note that this is with respect to pre-silicon validation; such cases will surely hit in post-silicon and cost you lots of $$$. Trust me, the up-front cost of proving functionality greatly reduces risk and is well worth the effort. This is sometimes difficult to convince recent college grads of.

  • Have "chicken bits". Chicken bits are bits in MMIO set via the driver to disable a feature in silicon. It's intended to revert changes made in which confidence is not high (confidence is directly proportional to validation efforts). It is next to impossible to hit every possible state in pre-silicon. Confidence on your design cannot truly be met until it's proven in post-silicon. Even if there is only 1 state that is hit 0.000005% of the time that exposes the bug, it WILL HIT in post-silicon, but not necessarily in pre-silicon.

  • Avoid exceptions in the control path at all costs. Every new exception you have doubles your validation effort. This one is hard to explain. Let's say there is a DMA block that saves data out to memory for another block to use, and that the data structure saved out depends on which function is being performed. If you design it so that the saved data structure differs between functions, you have just multiplied your validation effort by the number of DMA functions. If this rule is followed instead, the saved data structure is a superset of all the data available for every function, with the content locations hard-coded; once the DMA save logic is validated for one function, it is validated for all of them.

  • Minimize interfaces (read: minimize control paths). This is related to minimizing exceptions. First, every new interface requires validation: new checkers/trackers, assertions, coverage points, and bus functional models in your testbench. Second, it can increase your validation effort exponentially! Let's say you have one interface for reading data from caches, and then (for some odd reason) you decide you want another interface for reading main memory. You have just quadrupled your validation effort, because you now need to validate these combinations at any given time:

    • no cache read, no memory read
    • no cache read, memory read
    • cache read, no memory read
    • cache read, memory read
  • Understand and communicate assumptions. Failing to do this is the main reason for block-to-block communication issues. You could have a perfect, fully validated block... however, without understanding all the assumptions, your block will fail when it is connected.

  • Minimize potential states. The fewer states (intended or unintended) a design has, the less effort is required to validate it. It is good practice to group like functions into one top-level function (such as sequencers and arbiters). It is very difficult to identify and define this high-level function so that it encompasses as many smaller functions as possible, but doing so vastly reduces state and, in turn, the potential for bugs.

  • Always provide a strong signal leaving your block; most of the time, flopping it is the solution. You have no idea what the endpoint block(s) will do with it, and you could otherwise run into timing issues that have a direct impact on your perfect implementation.

  • Avoid Mealy-type FSMs unless performance would otherwise suffer; Mealy FSMs are more likely than Moore FSMs to produce timing issues.

  • ... and finally the one I dislike the most: "if it ain't broke, don't fix it." Because of the risk involved and the high cost of bugs, hacking around a problem is often the more practical solution. Others have alluded to this by mentioning the reuse of existing components.
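
To make the first bullet (control paths vs. data paths) concrete, a small sketch with invented names: the control flop gets a reset so the machine wakes up in a known state, while the data flop does not, which saves reset routing and lets X propagation flag uses of uninitialized data in simulation.

    // Illustrative only: reset the control state, not the data register.
    module ctrl_vs_data (
        input  wire        clk,
        input  wire        rst_n,
        input  wire        start,
        input  wire [31:0] din,
        output reg  [31:0] dout,
        output reg         busy
    );
        // Control path: must come out of reset in a known state.
        always @(posedge clk or negedge rst_n) begin
            if (!rst_n)
                busy <= 1'b0;
            else
                busy <= start;
        end

        // Data path: no reset - the contents only matter when 'busy'
        // (the control path) says they are valid.
        always @(posedge clk) begin
            if (start)
                dout <= din;
        end
    endmodule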

As for comparing against more traditional software design:

  • Discrete event-driven programming is a completely different paradigm. People see Verilog syntax and think "oh, it's just like C"... but this could not be further from the truth. Although the syntax is similar, you must think differently. For example, a traditional debugger is virtually meaningless for synthesizable RTL (testbench design is different); waveforms on paper are the best tool available. That said, FSM design can at times mimic procedural programming, and people with a software background tend to go crazy with FSMs (I know I did at first).

  • SystemVerilog has lots and lots (and lots) of testbench-specific features and is fully object-oriented. As far as testbench design goes, it is very similar to traditional software design. However, it has one more dimension: time. Race conditions and protocol delays must be accounted for (a toy sketch follows this list).

  • As for validation, it is also different (and the same). There are three main approaches:

    • Formal property verification (FPV): you prove through logic that the design will always work.
    • Directed random testing: delays, input values, and feature enables are randomized from a seed, where "directed" means the seed puts more weight on paths with less confidence. This approach uses coverage points to indicate health.
    • Focused testing: this is similar to traditional software testing.
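
For the SystemVerilog point above, a toy class-based stimulus sketch (not tied to any real testbench; every name here is invented): randomized transactions that also carry a delay, the extra "time" dimension mentioned.

    // Toy example of the object-oriented, randomized testbench style.
    class bus_txn;
        rand bit [31:0]   addr;
        rand bit [31:0]   data;
        rand int unsigned delay_cycles;          // the "time" dimension
        constraint c_delay { delay_cycles inside {[0:10]}; }
    endclass

    module tb;
        bit clk;
        always #5 clk = ~clk;

        initial begin
            bus_txn t = new();
            repeat (4) begin
                if (!t.randomize()) $fatal(1, "randomize failed");
                repeat (t.delay_cycles) @(posedge clk);
                $display("@%0t drive addr=%h data=%h", $time, t.addr, t.data);
            end
            $finish;
        end
    endmodule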

... for completeness, I need to also discuss best test-bench design practices... but that's for another day

Sorry for the length... I was in "The Zone" :)
