When it comes to compliance, labelling everything as AI can be counterproductive.
For example, engineering has long relied on algorithms, statistical analysis, modelling, and prediction, and that practice should continue without being conflated with AI.
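To make the contrast concrete, here is a minimal sketch of that kind of established practice: an ordinary least-squares fit computed from the closed-form normal equations (the data and variable names are hypothetical, used purely for illustration). Every step follows a fixed formula that can be audited line by line against standard statistical references, with no training pipeline or learned behaviour involved.

```python
import numpy as np

# Classical statistical prediction: ordinary least squares via the
# normal equations. The entire procedure is a fixed, closed-form
# calculation specified in advance by the engineer.
# (Synthetic, hypothetical data for illustration only.)

rng = np.random.default_rng(seed=0)

# Hypothetical sensor readings following y = 2x + 1 plus noise.
x = rng.uniform(0.0, 10.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=100)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# Closed-form solution: beta = (X^T X)^{-1} X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)
intercept, slope = beta
print(f"intercept={intercept:.3f}, slope={slope:.3f}")
```

A reviewer or regulator can verify a model like this purely from the mathematics; whether a given system crosses the line into AI is exactly the boundary question that a definition must settle.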
However, AI does have distinctive characteristics, such as behaviour learned from data rather than explicitly specified, that, if not properly understood, can pose significant risks to the design and intended outcomes of the solutions being developed.
Even so, labelling all of this as AI could create unnecessary regulatory uncertainty and complexity, imposing obligations on work that is already governed by existing practice guidelines and standards.
Defining AI is therefore crucial, not only to mark the boundary where genuinely new, currently unaddressed risks are introduced, but also to ensure you do not inadvertently create legal, regulatory, or ethical exposure for yourself.