Problem classes like P and NP are defined by asymptotic behavior as some number n goes to infinity. There's no everyday situation where taking a limit to infinity is useful, and the precision needed to define the classes that way is brutal.
As such, I find it's easiest to describe P and NP as "fast" and "slow," and then use the practical example of cryptography, because it's interesting to people. I start with NP (because there are fast and slow P algorithms, so I want to anchor their concept of "slow" to exponential time before bringing in P). Everyone has dealt with some sort of geometric growth somewhere, even if it's just compound interest, so there's something to start from.

Once I think they have some idea of exponential time, I introduce the cryptography link. One of the goals of cryptography is that adding 1 bit to a key doubles how long it takes someone to break it. The essential part is to connect the idea that adding a little work for the sender/receiver multiplies the amount of work needed by the attacker.

With that, I can then bring up Moore's law, which very roughly says computing power doubles every 18 months. That means my attacker is able to do twice as much work if he can wait for the hardware to catch up. When his attacking capability doubles, I have to add a bit. Then he doubles again, and I add another bit. Then I can show just how asymmetric this game is: every time they do a massive amount of extra work, I have to do just a little extra work to keep things even.

Now I can teach polynomial time as "algorithms that run faster than exponential time." This is a bit of a simplification, but for a layman I think it's okay. If my crypto could be broken in polynomial time, then as my attacker's computational speed improved, I'd have to add bits quicker and quicker, bogging down the system. Everyone knows what it means when a computer runs slow!
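To make that asymmetry concrete, here's a minimal sketch in Python. It's my own illustration, not anything from a real cryptosystem: it assumes brute force on a k-bit key costs 2^k operations, imagines a hypothetical polynomial-time break costing k^3 operations for contrast, and tracks the key size needed to stay ahead of an attacker whose operation budget doubles with each Moore's-law step. The names (`exp_bits`, `poly_bits`, the cubic exponent, the starting budget) are all made up for the example.

```python
import math

# Sketch: how many key bits k are needed so that breaking the key costs
# more operations than the attacker's budget allows.
# Assumptions (mine, for illustration only):
#   - an exponential-time attack (brute force) costs 2**k operations
#   - a hypothetical polynomial-time attack costs k**3 operations

def exp_bits(budget):
    """Smallest k with 2**k > budget (brute-force attack)."""
    return math.floor(math.log2(budget)) + 1

def poly_bits(budget, c=3):
    """Smallest k with k**c > budget (imagined polynomial-time attack)."""
    k = round(budget ** (1 / c))  # float estimate, then correct upward
    while k ** c <= budget:
        k += 1
    return k

budget = 2 ** 40  # arbitrary starting attacker budget, in operations
print("doubling  exp-attack bits  poly-attack bits")
for step in range(6):
    print(f"{step:8}  {exp_bits(budget):15}  {poly_bits(budget):16}")
    budget *= 2   # Moore's law: attacker capability doubles each step
```

Run it and the brute-force column creeps up by one bit per doubling, while the polynomial column grows by a constant *factor* (about 26% per doubling for a cubic attack): that's the "adding bits quicker and quicker" problem in the last paragraph.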