ARM TrustZone monitor mode switch design

Published 2020-04-21 02:43

The basic world switch flow is:

Set FIQ to be taken in monitor mode.

  1. normal world -> FIQ triggered
    1. -> enter monitor mode (do the switch to the secure world, restore the secure world context)
    2. -> now in secure world sys mode
    3. -> FIQ is not cleared, so we enter the FIQ handler in the secure world
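The context save/restore inside the monitor can be sketched in plain C (a minimal simulation only; `world_context`, `monitor_state`, and the software `scr` copy are illustrative assumptions, and a real monitor does this with banked registers in assembly):

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define SCR_NS (1u << 0)  /* SCR.NS: 0 = secure world, 1 = normal world */

/* Hypothetical per-world state the monitor saves and restores. */
struct world_context {
    uint32_t r[13];   /* r0-r12 */
    uint32_t sp, lr;  /* banked sp/lr of the interrupted mode */
    uint32_t spsr;    /* CPSR of the interrupted world */
};

struct monitor_state {
    struct world_context normal, secure;
    uint32_t scr;     /* software copy of the Secure Configuration Register */
};

/* On FIQ entry the monitor saves the normal-world context, restores the
 * saved secure-world context, and clears SCR.NS so the exception return
 * lands in the secure world. */
void monitor_switch_to_secure(struct monitor_state *m,
                              struct world_context *cpu)
{
    m->normal = *cpu;   /* save interrupted normal-world state */
    *cpu = m->secure;   /* restore previously saved secure-world state */
    m->scr &= ~SCR_NS;  /* next exception return enters the secure world */
}
```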

About the last two steps: after we restore the target context, will the ARM core trigger the exception so that we enter the secure-world FIQ handler? Is that behavior correct (if we don't branch to the FIQ handler from the monitor mode vector table)?

We need a flow like the one below. (This is the no-world-switch case: just enter monitor mode to check whether a world switch is needed, then enter the IRQ exception directly from monitor mode. We need this because of a hardware limitation: our chip only has IRQ.)

Set IRQ to be taken in monitor mode.

  1. normal world user mode -> IRQ triggered
    1. -> enter monitor, do whatever we want to hook, check whether we need a context switch, prepare SPSR/LR for IRQ mode
  2. -> enter normal world IRQ mode, IRQ handling
  3. -> IRQ done, return to user mode
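The "prepare SPSR/LR for IRQ mode" step can be sketched as pure arithmetic on the mode bits (a simulation under stated assumptions: `fake_irq_entry` is a made-up name, and the actual transfer into the banked SPSR_irq/LR_irq and the exception return need assembly in a real monitor; the same return-address convention as a direct IRQ entry is assumed):

```c
#include <stdint.h>
#include <assert.h>

#define MODE_MASK 0x1Fu
#define MODE_USR  0x10u
#define MODE_IRQ  0x12u
#define PSR_I     (1u << 7)   /* IRQs masked */

struct irq_entry {
    uint32_t cpsr;      /* CPSR the core would have on a direct IRQ entry */
    uint32_t spsr_irq;  /* value the monitor must write into SPSR_irq */
    uint32_t lr_irq;    /* value the monitor must write into LR_irq */
};

/* Given what the monitor captured on entry (SPSR_mon = interrupted CPSR,
 * LR_mon = return address), compute the values that make it look as if
 * the core had taken the IRQ directly in the normal world. */
struct irq_entry fake_irq_entry(uint32_t spsr_mon, uint32_t lr_mon)
{
    struct irq_entry e;
    e.spsr_irq = spsr_mon;  /* interrupted mode's CPSR, as on real entry */
    e.lr_irq   = lr_mon;    /* same return address */
    e.cpsr     = (spsr_mon & ~MODE_MASK) | MODE_IRQ | PSR_I;
    return e;
}
```

When the monitor returns with these values in place, the normal-world IRQ handler runs exactly as if the monitor were never involved.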

For the non-world-switch case, we would like the normal world OS to know nothing about monitor mode: to the OS it should look as though it entered IRQ mode directly and returned from IRQ mode. For the world-switch case, just do the switch in monitor mode.

Or should we just run the IRQ handler in monitor mode?

e.g.
normal world OS usr mode -> irq -> usr mode
normal world OS usr mode -> monitor -> irq handler -> usr mode

Is the flow possible and well designed?

1 answer

不美不萌又怎样 · 2020-04-21 03:03

Is the flow possible and well designed?

It is possible. 'Well designed' is subjective. It has several failings or non-ideal aspects. I guess your system doesn't have a GIC, which is a TrustZone-aware interrupt controller. The GIC has banked registers which allow the normal world OS to use it (almost) as if it were in the secure world.

It is not clear from your question whether you want the secure world to have interrupts; I guess so from the statement 'for the non-world-switch case...'. If interrupts are only ever handled by the normal world, things are simple: don't branch to monitor mode on an IRQ (or FIQ). There is a register to set this behaviour (SCR, the Secure Configuration Register).
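The routing choice comes down to a few SCR bits (bit positions per the ARMv7-A Security Extensions; the `scr_route_*` helper names are my own, and real code would move the result into SCR with an MCR instruction):

```c
#include <stdint.h>
#include <assert.h>

/* ARMv7-A Secure Configuration Register (SCR) routing bits. */
#define SCR_NS  (1u << 0)  /* 0 = secure world, 1 = normal world */
#define SCR_IRQ (1u << 1)  /* 1 = take IRQ to monitor mode */
#define SCR_FIQ (1u << 2)  /* 1 = take FIQ to monitor mode */
#define SCR_EA  (1u << 3)  /* 1 = take external aborts to monitor mode */

/* Normal-world-only interrupts: leave IRQ/FIQ routed to the current
 * world's own vectors (routing bits clear). */
static inline uint32_t scr_route_none(uint32_t scr)
{
    return scr & ~(SCR_IRQ | SCR_FIQ);
}

/* Dual-world interrupts on an IRQ-only chip: trap every IRQ to the
 * monitor so it can decide which world owns the interrupt. */
static inline uint32_t scr_route_irq_to_monitor(uint32_t scr)
{
    return scr | SCR_IRQ;
}
```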

For the dual world interrupt case, you have two issues.

  1. You need to trust the normal world OS.
  2. Interrupt latency will be increased.

You must always take the interrupt in monitor mode. The monitor must check the interrupt controller source to see which world the interrupt belongs to, and may need to do a world switch depending on the answer. This increases interrupt latency. As well, both the normal and the secure world will be dealing with the same interrupt controller registers, so you have malicious security concerns and non-malicious race conditions, with multiple interrupt drivers trying to manipulate the same registers (read-modify-write). Generally, if your chip doesn't have a GIC but the CPU supports TrustZone, then your system hasn't been well thought through for TrustZone use. The L1/L2 cache controllers must also be TrustZone-aware, and you may have issues there as well.

If you have Linux (or some other open-source OS) in the normal world, it would be better to replace the normal world interrupt driver with a 'virtual' interrupt driver. The normal world virtual IRQ code would use the SMC instruction to set virtual registers and register IRQ routines for specific interrupts. The secure world/monitor IRQ code would then branch directly to the decoded IRQ routine.
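A minimal sketch of such a dispatch table (the SMC marshalling is reduced to a plain function call here; `virq_register`/`virq_dispatch` and the table size are illustrative assumptions, not an existing API):

```c
#include <stddef.h>
#include <assert.h>

#define VIRQ_MAX 32

typedef void (*virq_handler_t)(int irq);

/* Table the secure world keeps; the normal world fills it in via SMC. */
static virq_handler_t virq_table[VIRQ_MAX];

/* Normal world side: in a real system this would be the SMC handler's
 * back end, called with arguments marshalled from r0-r2. */
int virq_register(int irq, virq_handler_t fn)
{
    if (irq < 0 || irq >= VIRQ_MAX)
        return -1;
    virq_table[irq] = fn;
    return 0;
}

/* Secure world / monitor side: decode the interrupt and branch directly
 * to the registered routine. */
int virq_dispatch(int irq)
{
    if (irq < 0 || irq >= VIRQ_MAX || !virq_table[irq])
        return -1;  /* spurious or unregistered interrupt */
    virq_table[irq](irq);
    return 0;
}

/* Demo handler used below. */
static int last_irq = -1;
static void demo_handler(int irq) { last_irq = irq; }
```

The design point is that the normal world never touches the interrupt controller registers itself, which removes the shared read-modify-write hazard described above.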


With a GIC, set the group 0 (secure world) interrupts to be signalled as FIQ and group 1 (normal world) as IRQ using the GICC_CTLR bit FIQEn. I.e., you classify each interrupt with the distributor in the GIC as either secure or normal (and therefore FIQ or IRQ).
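With a GICv2 this classification is bit arithmetic on the distributor's `GICD_IGROUPRn` registers, one bit per interrupt ID (a sketch over a shadow copy; real code writes the MMIO registers, and `set_group1` is an illustrative helper):

```c
#include <stdint.h>
#include <assert.h>

#define GICC_CTLR_FIQEN (1u << 3)  /* GICv2: signal group 0 as FIQ */

/* GICD_IGROUPRn: one bit per interrupt ID; 0 = group 0 (secure/FIQ),
 * 1 = group 1 (normal/IRQ). Find the register and bit for an ID. */
static inline unsigned igroupr_index(unsigned irq) { return irq / 32; }
static inline uint32_t igroupr_bit(unsigned irq)   { return 1u << (irq % 32); }

/* Mark an interrupt as group 1 (normal world) in a shadow copy of the
 * IGROUPR registers. */
static void set_group1(uint32_t igroupr[], unsigned irq)
{
    igroupr[igroupr_index(irq)] |= igroupr_bit(irq);
}
```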

You have to work through scheduling issues and how you want the different OSes to pre-empt each other. The normal (easiest) approach is to always have the secure OS running, but this means that some Linux (normal world) interrupts may be very delayed by the secure world (RTOS) mainline code.
