Challenges and Ethical Considerations of Implementing Generative AI in Manufacturing

A system integrator’s recommendations on navigating the complex issues related to data privacy, ethical artificial intelligence use and workforce training when it comes to generative AI applications.

By Luigi De Bernardini

Generative AI represents a transformative force for industry. It can revolutionize product design, optimize production processes and enhance maintenance strategies. However, integrating generative AI into manufacturing systems is not without its challenges.
A primary issue is data privacy. Manufacturing processes generate vast amounts of data, including proprietary information, operational details and employees’ personal data. Integrating AI systems requires extensive data to train models and generate insights, raising the risk of data breaches and unauthorized access.
Two key challenges stand out:
  • Data security—Ensuring that sensitive data is protected from cyber threats is critical. AI systems can become targets for hackers seeking to exploit vulnerabilities.
  • Compliance—Manufacturers must comply with stringent data protection regulations such as GDPR and CCPA, which mandate rigorous data handling and privacy standards.
To address these challenges, I recommend:
  • Robust Security Measures. Implement advanced encryption and cybersecurity protocols to protect data at rest and in transit. Regularly update security systems to address emerging threats.
  • Data Anonymization. Employ data anonymization techniques to remove personally identifiable information before using data in AI systems. This minimizes the risk of exposing sensitive information.
  • Compliance Audits. Conduct regular audits to ensure compliance with data protection regulations. Implement policies and procedures that align with legal requirements and industry best practices.
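As a sketch of what the rule-based side of data anonymization might look like before plant data is fed to an AI system: the patterns below, and the `EMP-` badge-ID format, are hypothetical placeholders, and production pipelines typically combine rules like these with NER-based detection.

```python
import re

# Illustrative PII patterns only -- not exhaustive, and the badge-ID
# format is an invented example of a site-specific identifier.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "BADGE": re.compile(r"\bEMP-\d{4,}\b"),
}

def anonymize(text):
    # Replace each match with a typed placeholder so the text stays
    # useful for training while the identifier itself is removed.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Operator jane.doe@example.com (EMP-00471) reported a fault at +1 555-123-4567."
print(anonymize(record))
# Operator [EMAIL] ([BADGE]) reported a fault at [PHONE].
```

Typed placeholders (rather than blank redaction) preserve the sentence structure that downstream models rely on.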

Ensuring fairness and transparency

The ethical use of AI is crucial to maintaining trust and integrity in manufacturing operations. Generative AI systems must be designed and deployed with considerations for fairness, accountability, and transparency to prevent biases and unintended consequences.
Here, the challenges include:
  • Bias in AI models—AI systems can inadvertently learn and perpetuate biases present in training data, leading to unfair outcomes.
  • Transparency—The black-box nature of AI models can make it difficult to understand and explain how decisions are made, raising accountability issues.
My recommendations to handle these challenges are:
  • Bias Mitigation. Develop and implement strategies to identify and mitigate biases in AI models. This includes diverse and representative training data, as well as continuous monitoring and testing for biased outputs.
  • Explainability. Invest in AI explainability techniques to make AI decision-making processes transparent and understandable. Tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can help demystify AI outputs.
  • Ethical Guidelines. Establish clear ethical guidelines for AI development and deployment. These should cover fairness, accountability, and transparency principles and be integrated into the organization’s AI governance framework.
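SHAP’s attributions are grounded in the game-theoretic Shapley value, which averages a feature’s marginal contribution over all subsets of the other features. As a rough illustration of the idea (this is not the shap library’s API, and the two-feature scoring model is invented), here is an exact computation with missing features filled in from a baseline:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x; features absent
    from a coalition are replaced by their baseline values (a common
    convention in SHAP-style explanations)."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi += weight * (value(s | {i}) - value(s))
        phis.append(phi)
    return phis

# Hypothetical scoring model: a weighted sum of two features.
model = lambda z: 3.0 * z[0] + 1.0 * z[1]
phis = shapley_values(model, x=[2.0, 4.0], baseline=[0.0, 0.0])
print(phis)  # [6.0, 4.0] -- attributions sum to f(x) - f(baseline) = 10
```

The additivity property shown in the output (attributions sum to the gap between the prediction and the baseline) is exactly what makes these values useful for auditing individual AI decisions.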

Workforce training

As generative AI becomes more relevant in manufacturing, it is essential to prepare the workforce for the changes it brings. This includes not only technical training but also creating and fostering a culture that embraces AI as a collaborative tool rather than a replacement.
The key challenges here include:
  • Skill gaps—Many workers may lack the necessary skills to effectively interact with and leverage AI systems, leading to resistance and inefficiencies.
  • Change management—The introduction of AI can lead to uncertainty and fear among employees about job security and role changes.
I recommend the following three steps to address these challenges:
  • Comprehensive Training Programs. System integrators can help to develop and deliver training programs that cover both the technical aspects of AI and its practical applications in manufacturing. This should include hands-on training, workshops, and ongoing support.
  • Collaborative Culture. Foster a culture of collaboration where AI is seen as an augmentation tool that enhances human capabilities and facilitates the operator’s daily activities. This can be done by highlighting use cases where AI has positively impacted operations.
  • Change Management Strategies. System integrators need to participate in implementing change management strategies to address concerns and promote a positive attitude towards AI. This includes clear communication, involvement of stakeholders in AI projects and providing clear information on AI boundaries.

Read the full article

How Generative AI is Revolutionizing Media & Entertainment

Generative AI is transforming the media & entertainment industry

By Sumedha Sen

Generative AI has brought significant changes to the media and entertainment industry, enabling content to be produced and customized dynamically in ways that were previously unimaginable.

Generative artificial intelligence (AI) has transformed how audiences perceive entertainment and media. With generative AI, creators can produce personalized content based on consumer preferences and behavior. Here, we will explore how generative AI is revolutionizing media and entertainment:

Accessing Insights into Audience Preferences

Generative AI helps creators of any scale understand what their audiences like, follow, and return to. Internet streaming platforms are a good illustration: AI can help decide what kind of content should be generated, and for which audience segments it should be tailored.

For instance, a video streaming platform might use AI with natural language processing to survey its audience, gather instant feedback on the kinds of content viewers want to see in their free time, and even gauge interest in possible future projects.

Reimagining the Creative Process

Generative AI can bolster the natural creativity already prevalent in the entertainment and media sectors across various mediums. The scope for reductions in time, expense, and effort, particularly in repetitive tasks, is significant.

Generative AI applications can contribute to value at every stage of the creative journey, beginning with initial steps like storyboarding and outlining scripts.

For instance, an AI application might create storyboards from a given prompt template, or it could analyze current storyboards to suggest improvements in layout, visual narrative, and framing.

Later in production, generative AI applications can help apply video effects, assist with music orchestration, and even edit or replace recorded dialogue.

Such possibilities are already growing with the development of text-to-video generation.

Providing Immersive and Interactive Experiences

Companies in the entertainment and media sectors are already leveraging virtual reality (VR) and augmented reality (AR) technologies, and the integration of generative AI can enhance these efforts, making them more fluid and effective.

For instance, AI technologies can assist users in crafting digital personas that resemble their real-life selves, mimicking their gestures and actions.

This degree of realism and customization contributes to the depth and interactivity of VR/AR experiences, rendering them more captivating and realistic.

Within the metaverse, media organizations can employ generative AI to craft captivating experiences that merge elements of fantasy with reality. A possible outcome of this strategy is a VR narrative where individuals interact with AI-created characters.

These AI characters will react to inquiries, participate in dialogues, and even modify the plot based on the decisions of the user.

Transforming Content Delivery and Consumption

Businesses are using generative AI to produce large amounts of content across platforms and formats, such as social networks, video, and long-form articles.

AI makes it possible to increase the volume of content while maintaining quality and sharpening the focus on key consumer groups.

AI-Powered Film Production

In one AI-assisted short film, ChatGPT supplied every character’s dialogue, the camera placement, their attire, and their facial reactions. Looking ahead, it’s anticipated that AI will create entire movies.

Generative AI is capable of creating high-quality content. Moreover, the short film gives viewers a glimpse of what’s to come in narrative storytelling, revealing a vision of a balanced relationship between humans and AI in the creative field.

AI-Powered Image Generation

Models such as Stable Diffusion, DALL-E, and Midjourney are capable of producing top-notch, incredibly lifelike images using text inputs. Within the realm of media and entertainment, creators can incorporate these images into various forms of content, including blogs, articles, social media updates, and ads.

This approach not only reduces the expenses associated with content creation but also enhances the artistic value of the work. Additionally, these images serve as starting points for generating AI-driven videos.

Content Localization

Content localization involves modifying and translating material to suit various languages, cultures, and areas.

This includes translating written content, developing conversations, crafting instructions, and providing content in local accents, among other tasks.

For instance, MagellanTV employs AI-driven generative technology to expand its collection of streaming documentaries for international audiences.

While the majority of the content was originally produced in English, the use of AWS services like Polly, Transcribe, and Translate streamlines the process of creating an automated system for dubbing and captioning.

In the gaming sector, content localization plays a crucial role. AI can assist game developers in overcoming language obstacles, thereby enhancing the inclusivity of their games — a feat that is more achievable with AI than with manual methods.

Furthermore, localizing content can enhance the gaming experience for players, leading to higher levels of engagement, retention, and loyalty.

Digital Avatars and Characters

Generative AI-driven virtual characters offer a cost-effective way to create and animate digital figures, since AI reduces the need to hire specialized performers.

Businesses are investigating how artificial intelligence can create avatars that are both lifelike and adaptable, suitable for films, video games, and immersive virtual environments.

Businesses that adopt this technology will be able to create distinctive and valuable content more efficiently than ever before. As entertainment companies introduce generative AI, they will have to adopt the right tools, mitigate potential risks, and track consumer sentiment.

FAQs

How is AI changing media?

AI is revolutionizing media by automating content creation, personalizing user experiences, and enhancing data analytics. It enables real-time language translation, generates tailored recommendations, and optimizes ad targeting. Additionally, AI-driven tools streamline video editing and improve content accessibility, fundamentally transforming how media is produced and consumed.

How is generative AI transforming the creative process?

Generative AI transforms the creative process by enabling rapid idea generation, automating content creation, and enhancing collaboration. It produces music, art, and writing, offering inspiration and efficiency. Artists and creators leverage AI to explore new styles, streamline workflows, and push the boundaries of creativity, revolutionizing traditional artistic methods.

How does generative AI generate new content?

Generative AI creates new content by learning patterns from vast datasets using machine learning models like GANs or transformers. It combines and reinterprets existing elements to produce novel outputs, such as text, images, or music. This process involves understanding context, style, and structure to generate coherent and original content.

How is generative AI changing OTT platforms?

Generative AI is revolutionizing OTT platforms by personalizing content recommendations, automating subtitle generation, and creating engaging trailers. It enhances user experiences through tailored content suggestions and improves accessibility. AI-generated insights help platforms optimize content strategies, increasing viewer retention and satisfaction while driving innovation in digital entertainment.

Which is the best generative AI tool?

Determining the best generative AI tool depends on the application. OpenAI’s GPT-4 excels in natural language processing, while DALL-E and MidJourney are leading in image generation. Google’s BERT is notable for understanding context in text. Each tool has unique strengths, making them suited for different creative and functional tasks.

Read the full article

Tokens are a big reason today’s generative AI falls short

By Kyle Wiggers

Generative AI models don’t process text the same way humans do. Understanding their “token”-based internal environments may help explain some of their strange behaviors — and stubborn limitations.

Most models, from small on-device ones like Gemma to OpenAI’s industry-leading GPT-4o, are built on an architecture known as the transformer. Due to the way transformers conjure up associations between text and other types of data, they can’t take in or output raw text — at least not without a massive amount of compute.

So, for reasons both pragmatic and technical, today’s transformer models work with text that’s been broken down into smaller, bite-sized pieces called tokens — a process known as tokenization.

Tokens can be words, like “fantastic.” Or they can be syllables, like “fan,” “tas” and “tic.” Depending on the tokenizer — the model that does the tokenizing — they might even be individual characters in words (e.g., “f,” “a,” “n,” “t,” “a,” “s,” “t,” “i,” “c”).

Using this method, transformers can take in more information (in the semantic sense) before they reach an upper limit known as the context window. But tokenization can also introduce biases.

Some tokens have odd spacing, which can derail a transformer. A tokenizer might encode “once upon a time” as “once,” “upon,” “a,” “time,” for example, while encoding “once upon a ” (which has a trailing whitespace) as “once,” “upon,” “a,” ” .” Depending on how a model is prompted — with “once upon a” or “once upon a ,” — the results may be completely different, because the model doesn’t understand (as a person would) that the meaning is the same.

Tokenizers treat case differently, too. “Hello” isn’t necessarily the same as “HELLO” to a model; “hello” is usually one token (depending on the tokenizer), while “HELLO” can be as many as three (“HE,” “El,” and “O”). That’s why many transformers fail the capital letter test.
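The case-sensitivity behavior above can be reproduced with a toy greedy longest-match tokenizer. The vocabulary here is invented for illustration (real tokenizers such as BPE learn their vocabularies from data), but the effect is the same: a lowercase word that exists in the vocabulary is one token, while its uppercase form falls apart into several.

```python
# Hypothetical vocabulary: the lowercase word "hello" was "seen" often
# enough to earn its own entry; the all-caps form was not.
VOCAB = ["hello", "fan", "tas", "tic", "HE", "L", "O"]

def tokenize(text, vocab=VOCAB):
    tokens, i = [], 0
    while i < len(text):
        # Greedily take the longest vocabulary entry matching at position i.
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # unknown character falls back to itself
            i += 1
    return tokens

print(tokenize("fantastic"))  # ['fan', 'tas', 'tic']
print(tokenize("hello"))      # ['hello'] -- a single token
print(tokenize("HELLO"))      # ['HE', 'L', 'L', 'O'] -- several tokens
```

Because the model only ever sees token IDs, "hello" and "HELLO" arrive as entirely different inputs, which is the root of the capital-letter failures described above.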

“It’s kind of hard to get around the question of what exactly a ‘word’ should be for a language model, and even if we got human experts to agree on a perfect token vocabulary, models would probably still find it useful to ‘chunk’ things even further,” Sheridan Feucht, a PhD student studying large language model interpretability at Northeastern University, told TechCrunch. “My guess would be that there’s no such thing as a perfect tokenizer due to this kind of fuzziness.”

This “fuzziness” creates even more problems in languages other than English.

Many tokenization methods assume that a space in a sentence denotes a new word. That’s because they were designed with English in mind. But not all languages use spaces to separate words. Chinese and Japanese don’t — nor do Korean, Thai or Khmer.

A 2023 Oxford study found that, because of differences in the way non-English languages are tokenized, it can take a transformer twice as long to complete a task phrased in a non-English language versus the same task phrased in English. The same study — and another — found that users of less “token-efficient” languages are likely to see worse model performance yet pay more for usage, given that many AI vendors charge per token.

Tokenizers often treat each character in logographic systems of writing — systems in which printed symbols represent words without relating to pronunciation, like Chinese — as a distinct token, leading to high token counts. Similarly, tokenizers processing agglutinative languages — languages where words are made up of small meaningful word elements called morphemes, such as Turkish — tend to turn each morpheme into a token, increasing overall token counts. (The equivalent word for “hello” in Thai, สวัสดี, is six tokens.)

In 2023, Google DeepMind AI researcher Yennie Jun conducted an analysis comparing the tokenization of different languages and its downstream effects. Using a dataset of parallel texts translated into 52 languages, Jun showed that some languages needed up to 10 times more tokens to capture the same meaning as in English.

Beyond language inequities, tokenization might explain why today’s models are bad at math.

Rarely are digits tokenized consistently. Because they don’t really know what numbers are, tokenizers might treat “380” as one token, but represent “381” as a pair (“38” and “1”) — effectively destroying the relationships between digits and results in equations and formulas. The result is transformer confusion; a recent paper showed that models struggle to understand repetitive numerical patterns and context, particularly temporal data. (See: GPT-4 thinks 7,735 is greater than 7,926).
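A toy vocabulary makes the inconsistency concrete: if the tokenizer happens to have learned “380” as a single entry but not “381”, adjacent numbers get split differently. (The vocabulary below is invented for illustration.)

```python
# Greedy longest-match over a hypothetical numeric vocabulary in which
# "380" appeared often enough in training data to become one token.
NUM_VOCAB = sorted(["380", "38", "0", "1", "3", "8"], key=len, reverse=True)

def tokenize_number(text):
    tokens, i = [], 0
    while i < len(text):
        for piece in NUM_VOCAB:
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize_number("380"))  # ['380'] -- one token
print(tokenize_number("381"))  # ['38', '1'] -- split into a pair
```

To the model, nothing ties the token for “380” to the pair (“38”, “1”), so the ordinary arithmetic relationship between the two numbers is invisible at the input level.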

That’s also the reason models aren’t great at solving anagram problems or reversing words.

So, tokenization clearly presents challenges for generative AI. Can they be solved?

Maybe.

Feucht points to “byte-level” state space models like MambaByte, which can ingest far more data than transformers without a performance penalty by doing away with tokenization entirely. MambaByte, which works directly with raw bytes representing text and other data, is competitive with some transformer models on language-analyzing tasks while better handling “noise” like words with swapped characters, spacing and capitalized characters.

Models like MambaByte are in the early research stages, however.

“It’s probably best to let models look at characters directly without imposing tokenization, but right now that’s just computationally infeasible for transformers,” Feucht said. “For transformer models in particular, computation scales quadratically with sequence length, and so we really want to use short text representations.”
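Feucht’s quadratic-scaling point is easy to make concrete with back-of-the-envelope arithmetic. Assuming a rough average of four characters per token for English subword tokenizers (an illustrative figure, not a measured one), dropping tokenization multiplies sequence length by about 4 and self-attention compute by about 16:

```python
def attention_cost(seq_len):
    # Self-attention compares every position with every other position,
    # so compute grows with the square of the sequence length.
    return seq_len ** 2

chars = 2048             # a ~2,000-character document
chars_per_token = 4      # assumed average compression of a subword tokenizer
tokens = chars // chars_per_token

ratio = attention_cost(chars) / attention_cost(tokens)
print(ratio)  # 16.0 -- character-level input costs ~16x more attention compute
```

This is why short token sequences matter so much for transformers, and why byte-level approaches have so far needed non-transformer architectures like state space models.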

Barring a tokenization breakthrough, it seems new model architectures will be the key.

Read the full article

Simply configure: GenISys brings generative AI to plant engineering

By chemeurope

In the “GenISys” project, researchers at the University of Wuppertal are working with two practice partners to develop generative AI models to make the construction of bottling plants more intelligent and resource-efficient in the future. The overarching aim is to promote the use of artificial intelligence (AI) in relevant sectors of the economy.

Generative AI models are designed to generate new content from existing data. The models are already integrated into many business and user applications and demonstrate impressive capabilities, for example in generating human-like texts. “In the industrial production sector, however, the known potential and performance of generative AI approaches remains virtually untapped. This is partly because AI methods are not yet adapted to areas of application with very specific requirements,” explains Dr Hasan Tercan, Group Leader research area “Industrial Deep Learning” at the Institute for Technologies and Management of Digital Transformation at the University of Wuppertal.

Complex, cost-intensive, time-consuming

One such special area of application is the design and construction of industrial filling systems, for example for powdery and granular materials such as cement, which have to be filled into bags in mass production. The complex, partly manual configuration process of these systems is characterised by laboratory tests to determine the properties of the material to be filled as well as the development and multi-stage testing of a system prototype. In the event of new operating requirements and changing material properties, further necessary adjustment steps follow during operation of the system. “This labour-intensive nature of the design process, combined with the recurring need to redefine parameters due to material changes, underlines the need for a more innovative and adaptable approach to plant configuration,” says Tercan.

The scientist and his team are working together with software company Snap and plant manufacturer Haver & Boecker on the recently launched “GenISys” research project to reduce the number of test cycles with the help of digital technologies and the use of generative AI processes. Their aim is not only to drive forward the implementation of innovative ideas and services in the industry – lower production costs and less use of materials also protect the environment. According to the project partners, the significance of the innovation goes far beyond its direct application in mechanical and plant engineering. As the AI development and training process is carefully designed for adaptability and expandability, the application framework can later be reused seamlessly in different contexts – for example in the form of a licence model for an AI module kit – which enables integration into other industries.

Project approach

The vision of the project is to develop an AI-based, easy-to-use and interactive software application for plant construction companies and plant operators. The starting point for “GenISys” is data and information about a customer order, on the basis of which the software to be developed is to configure a new filling line. The data includes material properties of the product to be filled – such as particle size and density – which were determined by laboratory tests, as well as existing microscopic images of the product, which were previously mainly taken for documentation and verification purposes. Historical data from thousands of system configurations and product properties are also available to train the AI models integrated into the software.

In order to make the software suitable for use, the architecture of the AI models, training methods, modularisation strategies for integration into existing business processes and automation strategies for their continuous optimisation as well as concepts for integrating human feedback must be adapted and in some cases newly developed.

For example, the researchers are using advanced methods from the field of AI-based image recognition (convolutional neural networks) to automatically determine missing or difficult-to-determine characteristics, such as abrasion properties and moisture of the filling product, from the images and thus enrich the database. AI models are also being developed and trained (including conditional generative adversarial network models) that generate the appropriate system configuration based on the input data. In addition, according to considerations at the start of the project, separate artificial neural networks could be used to evaluate the solution found. The evaluation in turn flows into further training of the AI models.

“A key aspect of the project is the integration and further development of innovative learning strategies for data processing and model training, which we use to ensure that a deployed AI model can continuously adapt to new operating conditions such as material changes, new systems or use cases,” explains Tercan. The human factor also comes into play when it comes to learning: the software should later enable the operating personnel to provide feedback, check recommendations and correct potential errors in the configuration. Tercan: “The feedback loop also ensures that the AI system continues to learn and adapt on this basis, gradually improving the accuracy of its recommendations.”

Read the full article

THE BIGGER PICTURE: HOW TO SPOT DEEPFAKE IMAGES AS GENERATIVE AI TOOLS CONTINUE TO ADVANCE

By Kelvin Chan and Ali Swenson

AI fakery is quickly becoming one of the biggest problems confronting us online. Deceptive pictures, videos, and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools.

With AI deepfakes cropping up almost every day, depicting everyone from Taylor Swift to Donald Trump to Katy Perry attending the Met Gala, it is getting harder to tell what is real from what is not.

Video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skills to create deepfakes — just type a request and the system spits it out.

These fake images might seem harmless. But they can be used to carry out scams and identity theft or propaganda and election manipulation. Here is how to avoid being duped by deepfakes:

HOW TO SPOT A DEEPFAKE

In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have pointed out images with obvious errors, like hands with six fingers or eyeglasses that have differently shaped lenses.

But as AI has improved, it has become a lot harder. Some widely shared advice — such as looking for unnatural blinking patterns among people in deepfake videos — no longer holds, said Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI.

Still, there are some things to look for, he said. A lot of AI deepfake photos, especially of people, have an electronic sheen to them, “an aesthetic sort of smoothing effect” that leaves skin “looking incredibly polished,” Ajder said.

He warned, however, that creative prompting can sometimes eliminate this and many other signs of AI manipulation. Check the consistency of shadows and lighting. Often the subject is in clear focus and appears convincingly lifelike but elements in the backdrop might not be so realistic or polished.

LOOK AT THE FACES

Face-swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial skin tone match the rest of the head or the body? Are the edges of the face sharp or blurry?

If you suspect video of a person speaking has been doctored, look at their mouth. Do their lip movements match the audio perfectly?

Ajder suggests looking at the teeth. Are they clear, or are they blurry and somehow not consistent with how they look in real life?

Cybersecurity company Norton says algorithms might not be sophisticated enough yet to generate individual teeth, so a lack of outlines for individual teeth could be a clue.

THINK ABOUT THE BIGGER PICTURE

Sometimes the context matters. Take a beat to consider whether what you are seeing is plausible. The Poynter journalism website advises that if you see a public figure doing something that seems “exaggerated, unrealistic or not in character,” it could be a deepfake.

For example, would the pope really be wearing a luxury puffer jacket, as depicted by a notorious fake photo? If he did, wouldn’t there be additional photos or videos published by legitimate sources?

At the Met Gala, over-the-top costumes are the whole point, which added to the confusion. But such big name events are typically covered by officially accredited photographers who produce plenty of photos that can help with verification. One clue that the Perry picture was bogus is the carpeting on the stairs, which some eagle-eyed social media users pointed out was from the 2018 event.

USING AI TO FIND THE FAKES

Another approach is to use AI to fight AI. OpenAI said it is releasing a tool to detect content made with DALL-E 3, the latest version of its AI image generator. Microsoft has developed an authenticator tool that can analyze photos or videos and give a confidence score on whether they have been manipulated.

Chipmaker Intel’s FakeCatcher uses algorithms to analyze an image’s pixels to determine if it is real or fake. There are tools online that promise to sniff out fakes if you upload a file or paste a link to the suspicious material.

But some, like OpenAI’s tool and Microsoft’s authenticator, are only available to selected partners and not the public. That is partly because researchers don’t want to tip off bad actors and give them a bigger edge in the deepfake arms race.

Open access to detection tools could also give people the impression they are “godlike technologies that can outsource the critical thinking for us” when instead we need to be aware of their limitations, Ajder said.

THE HURDLES TO FINDING FAKES

All this being said, artificial intelligence has been advancing with breakneck speed and AI models are being trained on internet data to produce increasingly higher-quality content with fewer flaws. That means there is no guarantee this advice will still be valid even a year from now.

Experts say it might even be dangerous to put the burden on ordinary people to become digital Sherlocks because it could give them a false sense of confidence as it becomes increasingly difficult, even for trained eyes, to spot deepfakes.


Enterprises Must Modernize Before Starting Generative AI Projects

By Paras Chandaria

Strategies should include upgrading data management, ensuring data integrity and securing cloud environments
Generative AI has moved from the theoretical to the mainstream more rapidly than perhaps any other technology in recent memory. The ease of access and intuitive nature of generative AI have encouraged widespread adoption for both personal and business use, and millions of daily users are learning that the more they experiment with generative AI, the better it gets. We’ve seen many once-vaunted technologies fail to find their “killer apps” while interest slowly fizzles out, but generative AI seems to be living up to the hype.

There is already an understanding that generative AI, though still in its infancy, has the potential to revolutionize various domains and industries by enhancing creativity, personalization and innovation. The sheer range of its uses has created a rush to adopt generative AI as well as to invest in bespoke solutions and training as enterprises fear losing a step to more tech-savvy competitors.

Openness to new technology is good, but firms must remember that there is no one-size-fits-all approach to how generative AI is integrated into their organizations. Identifying the right tactics requires taking a step back to evaluate whether the infrastructure is fully prepared to support and execute any generative AI plan.

Effective integration of generative AI requires that enterprises do more than simply update their technology. Instead, they must adopt a comprehensive modernization strategy. While this will look different for each company, a nuanced strategy will typically involve upgrading data management and cloud infrastructure while simultaneously preparing for the inevitable challenges these advancements bring. Ensuring data integrity and secure cloud environments is crucial not only for long-term AI success but for a range of other efforts as well.

Addressing integration challenges — cultural resistance, data quality, or system compatibility — requires a comprehensive approach. These obstacles can be overcome through cultural adaptation, stringent data governance and selecting interoperable AI solutions. All of these tactics can help ensure a seamless transition to AI-enabled operations. In addition, businesses must remain mindful of existing regulations. The rapid pace of AI development as well as evolving legal and privacy considerations necessitate a well-thought-out integration plan that adheres to strict data privacy standards and incorporates privacy-by-design principles. 

The rise of generative AI is reshaping our approach to workforce management. As AI takes on more tasks, we’re shifting resources, focusing human talent on areas that demand creativity, strategic insight and innovation. This realignment ensures our workforce remains a vital asset in the AI-enhanced landscape, complementing technological capabilities with irreplaceable human skills. 

However, once a business has decided on and implemented its ideal AI approach, it is critical to build in adaptability so that it remains flexible enough to meet future needs. This means accounting not only for potential regulatory changes but also for the ability to adapt to changing environments and scale up as demand grows. Beyond easing the development and deployment of generative AI, modernized infrastructure, which includes robust and reliable hardware, software, data and security platforms, supports the ongoing maintenance and scalability of flexible AI applications across an organization.

Embracing generative AI is a huge step and can define an organization’s trajectory for years to come. This underscores the importance of identifying the right approach so that businesses will not be forced to backtrack if initial projections prove inaccurate. After all, in a field that is evolving this rapidly, there are bound to be significant disruptions over the coming years.

While generative AI has seemingly limitless potential, businesses should avoid leaping before they look and must instead take time to develop their approach. In doing so, they will be better positioned to forge a strategy that adequately addresses the cultural, technical, regulatory and ethical aspects of this emerging technology. Only by doing so can businesses position themselves for success while also ensuring that AI initiatives are consistent with their objectives, values and vision.

Read the full article

How Generative AI (GenAI) can empower corporate tax departments


By Thomson Reuters Tax and Accounting

The recent surge in artificial intelligence advancements has started to transform the professional services industry, and corporate tax departments are no exception.

Generative Artificial Intelligence (GenAI) presents a game-changing opportunity to transform corporate tax operations by enhancing human capabilities and driving strategic decision-making.  

Insights from the 2024 Generative AI in Professional Services Report

In the realm of professional services, the advent of GenAI has started a wave of transformation, particularly within corporate tax departments. 

A recent survey conducted by Thomson Reuters unveiled a remarkable revelation: 81% of professionals firmly believe in the applicability of GenAI to their work. This overwhelming consensus underscores the profound recognition of GenAI’s transformative potential. For corporate tax professionals, GenAI presents an exceptional opportunity to streamline their unique tax processes, mitigate potential risks, and extract profound insights from intricate financial data.

The integration of GenAI into tax workflows offers a wide range of benefits. By automating repetitive and time-consuming tasks, corporate tax specialists can redirect their focus toward more strategic and value-added activities. GenAI’s innate capability to analyze vast volumes of data empowers tax accountants to proactively identify potential risks and compliance concerns, thereby ensuring accuracy and unwavering adherence to regulatory mandates. Moreover, GenAI unveils valuable insights hidden within tax data, enabling tax professionals to make well-informed decisions and optimize tax strategies with unparalleled precision.

The marriage of GenAI and tax operations holds immense promise for the future. As technology continues to evolve at an exponential pace, tax professionals must embrace GenAI’s transformative capabilities to remain at the forefront of their field. By harnessing the power of GenAI, tax departments can unlock new levels of efficiency, accuracy, and strategic decision-making, contributing significantly to the overall success and competitiveness of their organizations. 

Enhancing accuracy and compliance with generative AI

Generative AI (GenAI) has the potential to revolutionize corporate tax departments by enhancing accuracy and compliance. AI-powered document review and analysis tools can help automate the extraction and validation of data from various sources, reducing the risk of human error and ensuring that all relevant information is captured accurately. Additionally, AI algorithms can analyze large volumes of data to identify patterns, trends, and potential risks, enabling tax professionals to make more informed decisions and proactively address any issues. 
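As a simplified sketch of the extraction-and-validation step described above (a rough, rule-based stand-in for illustration, not a Thomson Reuters product API), pattern matching can pull monetary amounts out of free text and check them against an expected ledger total, flagging discrepancies for human review:

```python
import re

# Match dollar amounts like $1,200.00 and capture the numeric part.
AMOUNT = re.compile(r"\$([0-9][0-9,]*\.[0-9]{2})")

def extract_amounts(text):
    """Return all dollar amounts found in the text as floats."""
    return [float(m.replace(",", "")) for m in AMOUNT.findall(text)]

def validate(text, expected_total, tolerance=0.01):
    """True if the extracted amounts sum to the expected total."""
    return abs(sum(extract_amounts(text)) - expected_total) <= tolerance

doc = "Invoice 17: services $1,200.00, expenses $84.50"
print(extract_amounts(doc))    # [1200.0, 84.5]
print(validate(doc, 1284.50))  # True
```

A production system would pair extraction like this with model-based review of the surrounding context, but the validate-against-a-known-total pattern is what reduces the risk of human error the article describes.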

Furthermore, GenAI can automate routine and repetitive tasks, freeing up valuable time for tax professionals to focus on higher-value activities that require critical thinking and strategic decision-making. This can significantly improve operational efficiency and productivity within the tax department. 

Collaboration tools powered by GenAI can facilitate seamless communication and knowledge sharing among tax professionals, both within the same organization and across different teams. This promotes a culture of collaboration and ensures that all relevant expertise is leveraged to achieve optimal outcomes. 

GenAI can also help in cost management by optimizing resource allocation and identifying opportunities for process improvement. By automating various tasks and processes, GenAI can help tax departments reduce operational costs and improve their overall cost effectiveness. 

Optimizing operational efficiency in corporate tax departments

The advent of Generative Artificial Intelligence (GenAI) presents a groundbreaking opportunity for tax departments to revolutionize their operational efficiency and achieve remarkable outcomes. 

GenAI-powered tools seamlessly streamline processes such as data entry, document management, and report generation, resulting in significant time savings and enhanced accuracy. This remarkable transformation enables tax professionals to focus their attention on tasks that require their specialized knowledge and expertise, maximizing their impact on the organization’s overall performance. 

Furthermore, GenAI fosters a culture of collaboration and knowledge sharing within tax departments. AI-driven collaboration tools facilitate seamless communication among team members, enabling them to effortlessly share insights, seek advice, and collectively make informed decisions. This collaborative environment promotes continuous learning and improvement, fostering an organizational culture that thrives on innovation and excellence. 

GenAI’s capabilities extend beyond streamlining processes and fostering collaboration. It empowers tax departments to optimize resource allocation and identify opportunities for process improvement. By meticulously analyzing data and discerning hidden patterns, AI algorithms provide invaluable insights into resource usage and inefficiencies. Armed with this knowledge, tax departments can make data-driven decisions, optimize their operations with surgical precision, and achieve substantial cost savings. 

GenAI offers a plethora of advantages for tax departments, ranging from enhanced accuracy and compliance to optimized operational efficiency. By embracing GenAI, tax professionals can unlock new horizons of efficiency, collaboration, and cost-effectiveness, propelling their organizations toward unprecedented success. 

Using GenAI in corporate tax for strategic decision-making

Generative AI (GenAI) is transforming the way corporate tax departments make strategic decisions. This groundbreaking technology empowers tax professionals with unprecedented capabilities, enabling them to navigate the complex and ever-changing landscape of taxation with remarkable precision and efficiency. 

One of the key advantages of GenAI lies in its ability to provide real-time insights into the impact of tax regulations and changes. Tax professionals can leverage GenAI-powered tools to stay ahead of the curve, constantly tracking the latest developments and assessing their potential implications on the organization’s tax liability. This proactive approach ensures compliance with evolving tax laws, minimizing the risk of penalties and reputational damage. 

GenAI also proves invaluable in simulating various tax scenarios and forecasting potential outcomes. Tax professionals can apply GenAI models to assess the impact of different tax strategies, investments, and business decisions on the organization’s tax liability. This enables them to make well-informed choices that align with the organization’s long-term goals and mitigate potential risks, ensuring sustainable financial success. 
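As a toy illustration of the scenario comparison described above (all figures, rates, and scenario names below are invented; a real GenAI-assisted workflow would model far more variables), evaluating strategies reduces to computing the liability under each and ranking the results:

```python
def liability(income, deductions, rate, credits=0.0):
    """Simplified tax liability: (income - deductions) * rate - credits."""
    taxable = max(income - deductions, 0.0)
    return max(taxable * rate - credits, 0.0)

# Hypothetical scenarios for a single planning question.
scenarios = {
    "baseline":           liability(1_000_000, 100_000, 0.21),
    "accelerated_deprec": liability(1_000_000, 250_000, 0.21),
    "rd_credit":          liability(1_000_000, 100_000, 0.21, credits=40_000),
}

best = min(scenarios, key=scenarios.get)
print(best, scenarios[best])  # rd_credit 149000.0
```

The value a generative model adds on top of arithmetic like this is in proposing the candidate scenarios and explaining their regulatory constraints, not in the final calculation itself.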

Moreover, GenAI enhances the efficiency of tax departments by automating routine and repetitive tasks. This includes tasks such as data extraction, document review, and report generation. By automating these processes, GenAI liberates tax professionals from mundane tasks, allowing them to focus their expertise on higher-value activities that require critical thinking and strategic decision-making. This leads to increased productivity, improved accuracy, and enhanced overall performance of the tax department. 

Read the full article

 

How Gendo’s Generative AI Platform is Transforming Architectural Visualizations


By Maria-Cristina Florian

The introduction of AI generative tools represents one of the most significant technological revolutions in the field of architecture and design. While there is concern about this changing the working landscape for professionals in the field, a significant number of practices are embracing the new technology. Architectural visualizations represent one of the main areas where these changes take effect. However, the array of AI tools accessible to non-specialist users rarely allows for true control over the design process, often offering general interpretations of scripts. This can be helpful during early conceptual design phases but loses its appeal soon after. Gendo, a new browser-based app, aims to change this, offering the possibility to not only generate visualizations in seconds but also to edit and customize them, even introducing real-life products in the design. Until August 3, readers of ArchDaily can register and use the code ARCHDAILY50 to get 50% off any plan.

Architectural visualizations are critical in helping firms develop their design solutions, collaborate with clients, and win competitive bids, but the process is often complex and time-consuming. Gendo’s generative AI platform accelerates this process, enabling architects to produce complex visualizations rapidly while maintaining control over the output. This is achieved by accommodating a variety of inputs, including 2D drawings or sketches, and text prompts, with plans to further develop the program to accommodate 3D models. The beta version has already been employed by internationally recognized practices like Zaha Hadid Architects, KPF, and David Chipperfield Architects.

Gendo Style Output. Image Courtesy of Gendo

Starting with text or image-based input, users can then use text prompts to obtain detailed visualizations in various styles and with a range of options to choose from. Once the images are generated, Gendo also allows for extensive customization, enabling architects and designers to tweak specific regions, adjust colors, lighting intensity or direction, and structural features, or add specific furniture. The program also allows the creation of culturally appropriate clothing and topographically correct trees, ensuring that every detail in the rendering can be controlled by the user. Gendo’s ongoing development aims to expand its capabilities further, with proposed advancements aiming to integrate real-life products, including materials, finishes, and furniture, into the program.

Founded by architectural designer and visualizer George Proud and software engineer Will Jones, Gendo was designed specifically for architects and designers, aiming to become a useful tool not only in the conceptual phases but throughout the design process. By streamlining the visualization process, Gendo aims to enable architects to focus more on the creative aspects of their work, empowering them to test out ideas quickly without losing the original scale and characteristics of their designs.

Gendo Design to Image Input and Output. Images Courtesy of Gendo

In our industry, detail and precision is everything. Gendo has been designed specifically for these professionals; we’ve built an AI platform that speeds up design work and allows creativity to flourish. We’re eliminating the burdensome processes currently involved in visualizations and instead making it an efficient, instinctive, and empowering experience. Gone is the era of waiting days to receive one small image of a tree to add to your design. – George Proud, CEO & co-founder at Gendo

Gendo Design to Image Input and Output. Images Courtesy of Gendo

As AI continues to evolve, its integration into the field of architecture is proving to have a transformative effect on the profession. From enhancing urban planning to making design more accessible and efficient, artificial intelligence raises important questions about the future of creativity and expertise in the industry. At an urban scale, AI-informed urban planning holds significant promise for creating more intelligent, efficient, and sustainable cities. The technology also has applications in the efforts to decarbonize the building industry, as it enables changes and assessments from conception to building implementation. At an individual level, AI is also heralded for its potential to democratize design, lowering the entrance threshold in the field.

Read the full article

Generative AI: 5 tips to create the best visual assets for your communication


By Maddyness UK

Generative AI is now as accessible to the general public as it is to businesses. However, mastering its use in professional communication requires understanding its nuances.

In the last two years, generative Artificial Intelligence has transformed the world of visual creation. AI offers endless possibilities for image creation, enabling businesses and individuals alike to produce unique and personalised visuals. Today however, instead of mastering visual tools, creators need to master the art of words to create their assets. Follow these five tips to use AI to generate high-quality images.

“Generative AI models are trained on millions, if not billions, of examples and can predictively generate texts or images,” explains Benoît Raphaël, a journalist and entrepreneur specialising in AI. “Hundreds of millions of people use it, and anyone can generate their own images. But AI is not a quick fix; it is actually quite the opposite!”

“We see generative AI for image creation as a powerful tool that can help SMEs and creators make their work in a more efficient way,” says Sandra Michalska, Creative Insights Manager EMEA at iStock, which launched its own generative AI earlier this year, trained exclusively on its vast creative library. “According to a study by our VisualGPS research platform, 67% of British consumers are excited about the idea that AI can improve their lives by completing tasks more quickly. AI also offers creators the chance to produce visuals that have never been seen before!”

Choose your words wisely

A well-structured prompt will yield quality images. Creatives no longer need graphic skills and expertise but must be able to articulate their ideas with words, a practice known as visual prompting. Start with simple descriptions, be concise with the words you choose, play around with the order of your terms, and aim not to exceed 50 words. Be descriptive and include the most important elements at the beginning of the text: the subject, the location, and then details and artistic directions (“wide frame,” “natural light,” etc.). If you struggle to articulate your ideas as words, you can use tools that offer prompting aids.
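For programmatic workflows, the ordering advice above can be captured in a small helper (a hypothetical utility for illustration, not part of any iStock tooling): subject first, then location, then details and artistic directions, with a guard on the 50-word budget.

```python
def build_prompt(subject, location, details=(), directions=(), max_words=50):
    """Assemble a prompt in the recommended order: subject, location,
    then details and artistic directions, capped at max_words."""
    parts = [subject, location, *details, *directions]
    prompt = ", ".join(p for p in parts if p)
    if len(prompt.split()) > max_words:
        raise ValueError(f"prompt exceeds {max_words} words")
    return prompt

print(build_prompt(
    "a red-brick lighthouse", "on a rocky Atlantic coast",
    details=["stormy sky", "crashing waves"],
    directions=["wide frame", "natural light"],
))
# a red-brick lighthouse, on a rocky Atlantic coast, stormy sky,
# crashing waves, wide frame, natural light
```

Keeping the assembly explicit like this makes iteration easy: swap or reorder one component at a time and compare the generated images.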

“Generative AI by iStock provides a prompt creator to make the process faster and easier for users. If the prompt doesn’t generate the desired result, think about how you can describe the scene differently,” explains Sandra Michalska. “You can even search for images similar to those generated by our AI through reverse image search, which can be quicker than using keywords to find the right image.”

Improve your visual literacy

A strong visual literacy is crucial for refining your prompts. Quality images require an understanding of the rules of composition. It’s also crucial to master colour selection and how they harmonise.

“You’ll be able to take great photos if you have a good eye, which is why photographers excel with generative AI. This shows that AI is still just a tool,” states Benoît Raphaël.

Choose the right tool

Free generative AI services and tools are not necessarily safe for commercial use, as they are trained on content for which they may not have the necessary permissions, leading to the potential for Intellectual Property and copyright infringements.

Some tools, like Generative AI by iStock, offer options that minimise legal risks. For example, iStock’s AI generator provides users with legally safe images based on secure and controlled content. As the AI is trained exclusively on their own authorised creative content library, it is not polluted by images from potentially unsafe data sets or sourced from the internet.

“Our goal is to offer SMEs a safe and affordable way to explore commercially safe generative AI for their marketing and advertising materials,” explains Sandra Michalska. “For example, Generative AI by iStock does not generate existing products, people, places, or other elements protected by copyright because these images are not part of our AI training. Beyond that, you can let your imagination run wild!”

Adapt your prompts based on your results

Think about what type of image will work best for your audience, messages, and business objectives. Then, try different combinations of terms and analyse the results to learn and improve. This work is an exercise in iteration, and you will rarely get the optimal result from your first prompts.

“So test: add details and remove them! Transforming a prompt into an image is not a quick fix. It’s a visual work that must be done gradually, with a bit of chance,” asserts Benoît Raphaël.

Be transparent about your use of AI

AI can be seen as an additional tool, capable of enhancing human creativity and optimising creative workflows. However, if your campaign’s scope is based on authentic connection and trust, opt for traditionally produced images over AI-generated ones. According to an iStock VisualGPS study, 97% of British consumers believe that authentic images and videos are essential to building trust with a brand. If you use AI to create content depicting humans, you should consider tagging it as being AI-generated to avoid misleading your audience.

“Before you start using generative AI tools to create images, think about what type of image—AI-generated or traditionally produced (royalty-free)—aligns best with your audience and what you are aiming to achieve as a business. Human creativity can offer endless possibilities, especially when applied to prompts or visual concepts. AI is ideal for creating images that are difficult to recreate in real life, like a penguin standing on a road in the middle of a city. However, if you’re looking to tell the story of real people in a real place, then it might be better to use existing images that include or were created by those you aim to represent. Authenticity is essential if you seek to build trust with your consumers,” concludes Sandra Michalska.

Read the full article

Decoding How the Generative AI Revolution BeGAN


By Gerardo Delgado

NVIDIA Research’s GauGAN demo set the scene for a new wave of generative AI apps supercharging creative workflows.

Generative models have completely transformed the AI landscape — headlined by popular apps such as ChatGPT and Stable Diffusion.

Paving the way for this boom were foundational AI models and generative adversarial networks (GANs), which sparked a leap in productivity and creativity.

NVIDIA’s GauGAN, which powers the NVIDIA Canvas app, is one such model that uses AI to transform rough sketches into photorealistic artwork.

How It All BeGAN

GANs are deep learning models that involve two complementary neural networks: a generator and a discriminator.

These neural networks compete against each other. The generator attempts to create realistic, lifelike imagery, while the discriminator tries to tell the difference between what’s real and what’s generated. As the two networks keep challenging each other, a GAN gets better and better at producing realistic-looking samples.
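The adversarial loop described above can be sketched in a few lines. The following is a deliberately minimal 1-D illustration (not NVIDIA’s implementation): the “generator” is just a linear map z → a·z + b, the “discriminator” a logistic regression on scalars, and the two take alternating gradient steps on the standard GAN objective. Real data is drawn from N(4, 1), and the generator’s samples drift from a mean of 0 toward the real mean.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0  # generator parameters (fakes start centered at 0)
w, c = 0.1, 0.0  # discriminator parameters

def discriminate(x):
    """Estimated probability that each sample in x is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

lr = 0.05
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = discriminate(real), discriminate(fake)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating loss)
    d_fake = discriminate(a * z + b)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The fake distribution's mean has moved toward the real mean of 4.
print(f"fake mean after training: {np.mean(a * rng.normal(size=10000) + b):.2f}")
```

Real GANs like GauGAN replace both linear maps with deep convolutional networks and operate on images rather than scalars, but the alternating-minimax structure is the same.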

GANs excel at understanding complex data patterns and creating high-quality results. They’re used in applications including image synthesis, style transfer, data augmentation and image-to-image translation.

NVIDIA’s GauGAN, named after post-Impressionist painter Paul Gauguin, is an AI demo for photorealistic image generation. Built by NVIDIA Research, it directly led to the development of the NVIDIA Canvas app — and can be experienced for free through the NVIDIA AI Playground.

GauGAN has been wildly popular since it debuted at NVIDIA GTC in 2019 — used by art teachers, creative agencies, museums and millions more online.

Giving Sketch to Scenery a Gogh

Powered by GauGAN and local NVIDIA RTX GPUs, NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscapes, displaying results in real time.

Users can start by sketching simple lines and shapes with a palette of real-world elements like grass or clouds, referred to in the app as “materials.”

The AI model then generates the enhanced image on the other half of the screen in real time. For example, a few triangular shapes sketched using the “mountain” material will appear as a stunning, photorealistic range. Or users can select the “cloud” material and with a few mouse clicks transform environments from sunny to overcast.

The creative possibilities are endless — sketch a pond, and other elements in the image, like trees and rocks, will reflect in the water. Change the material from snow to grass, and the scene shifts from a cozy winter setting to a tropical paradise.

Canvas offers nine different styles, each with 10 variations, plus 20 materials to play with.

Canvas features a Panorama mode that enables artists to create 360-degree images for use in 3D apps. YouTuber Greenskull AI demonstrated Panorama mode by painting an ocean cove, before then importing it into Unreal Engine 5.

Download the NVIDIA Canvas app to get started.

Consider exploring NVIDIA Broadcast, another AI-powered content creation app that transforms any room into a home studio. Broadcast is free for RTX GPU owners.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read the full article