Suppose an entity has two architectures defined for it. Both architectures belong to the same entity (obviously), but they assign different values to the output ports. My question is: how does the simulator determine what the outputs should be, i.e. which architecture does it choose?
Here is an example:
library ieee;
use ieee.std_logic_1164.all;

entity Exercise_4 is
    generic (n : integer := 4);
    port (
        a, b     : in std_logic_vector(n-1 downto 0);
        clk, rst : in std_logic;
        q, qn    : buffer std_logic_vector(n-1 downto 0));
end;

architecture one of Exercise_4 is
begin
    process (clk, rst)
    begin
        if rst = '0' then
            q <= (others => '0');
        elsif (clk'event and clk = '0') then
            q <= a;
        end if;
    end process;

    process (clk, rst)
    begin
        if rst = '0' then
            qn <= (others => '1');
        elsif (clk'event and clk = '0') then
            for i in a'range loop
                qn(i) <= not q(i);
            end loop;
        end if;
    end process;
end;

architecture two of Exercise_4 is
begin
    process (clk, rst)
    begin
        if rst = '0' then
            q  <= (others => '0');
            qn <= (others => '0');
        elsif (clk'event and clk = '0') then
            q  <= a;
            qn <= b;
        end if;
    end process;
end;
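For completeness, my testbench instantiates the entity without naming an architecture anywhere; the details below are only a rough sketch (signal names, labels and stimulus are illustrative, not my exact code), but the important point is that the instantiation itself does not say which architecture to use:

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative testbench sketch: the instantiation does not name an architecture.
entity tb_Exercise_4 is
end entity;

architecture sim of tb_Exercise_4 is
    signal a, b  : std_logic_vector(3 downto 0) := (others => '0');
    signal q, qn : std_logic_vector(3 downto 0);
    signal clk   : std_logic := '0';
    signal rst   : std_logic := '0';
begin
    -- No architecture named here; writing entity work.Exercise_4(one)
    -- would have selected architecture "one" explicitly.
    dut : entity work.Exercise_4
        generic map (n => 4)
        port map (a => a, b => b, clk => clk, rst => rst, q => q, qn => qn);

    clk <= not clk after 5 ns;   -- free-running clock
    rst <= '1' after 20 ns;      -- release the active-low reset
end architecture;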
I ran a simulation and saw that q is assigned the value of a and qn is assigned the value of b. It seems the compiler has chosen the second architecture, but I don't understand why it made that choice.
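To be clear, I did not write any configuration. I know that, in principle, one can bind a specific architecture explicitly with a configuration declaration, something like this minimal sketch (assuming the design is analysed into the work library):

-- Minimal configuration sketch (illustrative): explicitly binds
-- architecture "one" of Exercise_4.
configuration cfg_one of Exercise_4 is
    for one
    end for;
end configuration cfg_one;

Simulating cfg_one instead of the bare entity would use architecture one, but since I simulated the entity directly, something else must be deciding which architecture is taken.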
Thank you.