We humans have a biological predisposition to steer clear of danger and life-threatening situations; safety, and especially a sense of security, lies at the heart of our core psychological needs. For centuries we have been able to rely on this, and current safety legislation is centered on the idea that a human operating a system would not intentionally bring harm to themselves, others, or the environment they operate in. Manufacturers are therefore required to adhere to current safety legislation and directives in order to build trust in a machine's reliability and to ensure it is functionally safe. Advances in artificial intelligence offer exciting possibilities for the future and justify its practical implementation. As human control is gradually replaced by artificial intelligence, new challenges for safety arise.
In practice, safety means that a system can be trusted to perform its function reliably. Conventionally, the human operator establishes safety in the working environment through the way the system is controlled, for example by stopping the movement of a machine when a human comes dangerously close. As control of the machine moves to an artificial intelligence, assurances must be in place that enforce the safety of humans in the vicinity of the system.
A safely manufactured machine means that we can trust the arguments made for its safety, and supporting this trust means adhering to existing legislation and manufacturing directives. These conventional directives, however, are not ready for control systems that operate a machine in close collaboration with humans. Epec is the control system expert for mobile heavy machinery; as control of such machines increasingly comes from artificial intelligence, new challenges for safety arise.
Epec recognizes this, and together with them we focus on being at the frontline of ensuring the safety of machine operators and of those who share the machine's working environment. Therefore, the goal is to go beyond the limitations of conventional safety certification in relation to artificial intelligence and to develop a systematic framework that supports the development of safety-certifiable control systems for collaborative autonomous mobile machines.