Smarter AI means greater risks: Why guardrails matter more than ever

AI agents are changing the game for companies in general and for marketing in particular. While they take automation and efficiency to a new level, they also require strong governance. As AI agents become smarter and more integrated into daily operations, making sure they are used responsibly, securely and ethically is not just a nice-to-have; it is a must.

“The question is the same whether it is generative AI, traditional AI, machine learning or an agent,” said Kavitha Chennupati, senior director of global product management at SS&C Blue Prism. “Does the LLM return the appropriate answer or not?”

The very things that make AI useful, such as its ability to analyze large amounts of data at scale and to personalize customer experiences, to name just two, also raise the stakes when it gets something wrong.

“The impact of not having good-quality answers and good governance is an order of magnitude greater than it was before,” said Mani Gill, vice president of product at Boomi. “The scale at which an agent requests this data is far greater than a human asking for it. It multiplies by thousands and thousands.”

That is why you must have AI guardrails. And because marketing is leading the way in AI adoption, marketers need to know what guardrails are and how to develop them.

The first thing to know is that you don't start with the AI. You start with the people who decide what the rules are.

“We go in with a philosophy-first approach to governance,” said Chennupati. “Get the foundations in place before you start incorporating the technology.”

Any organization implementing AI must first create a governance council. The council consists of people from different business functions who define the AI policy for everything: from brand rules, to what data the AI can access, to when people need to intervene, and beyond.

Establishing guardrails: Guiding autonomous actions

Boomi AI Studio incorporates “built-in ethical guardrails” into its design environment, intended to steer the development and deployment of agents toward responsible actions. Beyond platform-specific features, Chennupati describes several key mechanisms for establishing guardrails, including:

  • Referencing decisions to trusted sources: Require agents to justify their actions by citing the data or logic they relied on.
  • Similarity-based checks: Use several AI models to perform the same task and compare their outputs to identify potential discrepancies or errors (a minimal sketch follows below).
  • Adversarial testing: Intentionally challenge agents with incorrect or misleading information during testing to assess their resilience and adherence to boundaries.

These help ensure that agents act effectively and reason soundly, all within acceptable parameters.
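To illustrate the similarity-based checks in the list above, here is a minimal Python sketch. It is an assumption-laden illustration, not any vendor's implementation: the model callables are hypothetical wrappers around whatever model APIs you use, the lexical similarity measure is a stand-in for the semantic comparison a production system would use, and the 0.8 threshold is invented.

```python
from difflib import SequenceMatcher
from typing import Callable

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two answers (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cross_check(prompt: str,
                models: list[Callable[[str], str]],
                threshold: float = 0.8) -> dict:
    """Run the same prompt through several models and flag disagreement.

    `models` is a list of callables (hypothetical wrappers around your
    model APIs) that each take a prompt and return a text answer.
    """
    answers = [m(prompt) for m in models]
    # Compare every pair of answers; the lowest score is the weakest link.
    scores = [
        similarity(answers[i], answers[j])
        for i in range(len(answers))
        for j in range(i + 1, len(answers))
    ]
    agreed = min(scores) >= threshold if scores else True
    return {"answers": answers,
            "min_similarity": min(scores) if scores else 1.0,
            "agreed": agreed}
```

If the models disagree, the answer would be held for human review instead of being sent to a customer. Swapping in embedding-based semantic similarity changes the scoring, not the control flow.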

Dig deeper: Are synthetic audiences the future of marketing testing?

Securing the keys: Data access and control

One of the main concerns in AI governance centers on data security and access control. The best approach is to apply the same role-based access security to agents that you should already be using for humans.

“Here's a typical agent use case: Wouldn't it be great if we let our employees access information about themselves and their teams?” said Gill. “Now it's very easy to connect that information from your human capital management system to your HR system. But if that security policy isn't right, all of a sudden the CEO's salary shows up in that agent.”
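To make Gill's scenario concrete, here is a minimal sketch of a role-based check an agent could run before touching HR data. The role names, resources and hard-coded permission map are all hypothetical; in practice the permissions would come from your identity provider. The point is that the agent inherits the requesting user's permissions rather than acting with its own.

```python
# Hypothetical allow-list mapping each role to the HR resources it may
# read. In a real deployment this would come from your identity provider,
# not a hard-coded dict.
ROLE_PERMISSIONS = {
    "employee": {"own_profile", "own_timesheet"},
    "manager": {"own_profile", "own_timesheet", "team_profiles"},
    "hr_admin": {"own_profile", "own_timesheet", "team_profiles",
                 "compensation"},
}

def agent_can_read(user_role: str, resource: str) -> bool:
    """The agent acts with the *user's* permissions, never its own."""
    return resource in ROLE_PERMISSIONS.get(user_role, set())

def handle_agent_request(user_role: str, resource: str) -> str:
    if not agent_can_read(user_role, resource):
        # Deny and log instead of silently widening access.
        return f"Access denied: role '{user_role}' may not read '{resource}'."
    return f"Fetching '{resource}' on behalf of a '{user_role}'."

# A regular employee asking the agent for salary data is refused;
# this is exactly the failure Gill describes when the check is missing.
print(handle_agent_request("employee", "compensation"))
print(handle_agent_request("hr_admin", "compensation"))
```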

This also applies to AI models outside the organization's direct control.

“An Agentforce agent runs on a model that you do not control,” said Chennupati. You can't just pretend to read the terms and conditions, as most of us do with technology. “You must understand the data privacy aspects.”

Constant vigilance

The patchwork of privacy laws means you need to know where data lives and where it can be transmitted. Otherwise, you risk steep fines and penalties. You also need a mechanism for staying current on changes to laws and regulations.
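One hedged way to operationalize “know where the data lives” in code: tag datasets with a region and check every proposed transfer against an allow-list before an agent moves anything. The regions and rules below are invented for illustration; the real rules must come from your legal team, not a code sample.

```python
# Hypothetical allow-list: which destination regions each source region
# may transmit personal data to. Illustrative only.
TRANSFER_ALLOWED = {
    "eu": {"eu"},          # e.g., keep EU personal data inside the EU
    "us": {"us", "eu"},
    "apac": {"apac"},
}

def transfer_permitted(source_region: str, dest_region: str) -> bool:
    """Check a proposed data transfer against the residency allow-list."""
    return dest_region in TRANSFER_ALLOWED.get(source_region, set())

assert transfer_permitted("eu", "eu")
assert not transfer_permitted("eu", "us")  # blocked: escalate, don't send
```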

One of the things that makes AI so valuable is its ability to learn and to apply what it learns. However, that means you must continuously monitor the AI to confirm it still follows the rules. Fortunately, you can use AI to monitor AI: separate systems check for anomalies to identify when an agent's behavior departs from expected norms.
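Here is a minimal sketch of what that monitoring can look like: a separate watcher keeps a baseline of one behavioral metric for an agent (records accessed per hour is an invented example) and flags readings that drift several standard deviations from the norm. Real anomaly detectors track many signals; the shape of the check is the same.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float,
                 sigmas: float = 3.0) -> bool:
    """Flag `latest` if it departs from the baseline in `history`.

    `history` might be, e.g., records accessed per hour over the past
    week (a hypothetical metric). Needs two points to compute spread.
    """
    if len(history) < 2:
        return False
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return latest != mu
    return abs(latest - mu) > sigmas * sd

# An agent normally touches ~100 records/hour; 5,000 warrants review.
baseline = [95, 102, 98, 110, 97, 101, 99]
print(is_anomalous(baseline, 5000))  # True: escalate to a human
print(is_anomalous(baseline, 104))   # False: within expected norms
```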

Still, you cannot leave everything to AI. Gill and Chennupati both stress the ongoing need for human intervention.

“It is not only a question of monitoring, but also of defining thresholds for actions, in terms of when you want to bring people into the loop,” said Chennupati. “It starts in the design phase. The design must include details on how the LLM arrives at a solution so that a human can see what is going on.”
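Chennupati's point about thresholds can be expressed as a simple escalation policy: each proposed agent action carries a confidence score and an impact level, and anything past the configured limits is routed to a person. The field names, levels and thresholds below are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    impact: str        # "low", "medium", or "high" (illustrative levels)

def route(action: AgentAction, min_confidence: float = 0.9) -> str:
    """Decide whether an action runs autonomously or goes to a human."""
    if action.impact == "high" or action.confidence < min_confidence:
        return f"HOLD for human review: {action.description}"
    return f"AUTO-APPROVE: {action.description}"

print(route(AgentAction("Send discount email to segment A", 0.97, "low")))
print(route(AgentAction("Delete 10,000 stale CRM records", 0.95, "high")))
```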

AI is evolving at breathtaking speed and becoming ever more entangled with every part of business operations. What once took days or even weeks can now be done in seconds. And with that great power comes, say it with me, great responsibility. As the saying goes, to err is human; to really foul things up, you need a computer.

Dig deeper: Salesforce and Microsoft face off with new AI sales agents
