Adobe’s suite of image and video editing products has long used artificial intelligence to assist its human users, with the Sensei AI engine powering features like Photoshop’s Neural Filters and Acrobat’s Liquid Mode for more than ten years. On Tuesday, the company unveiled its next generation of AI features: a family of generative models collectively dubbed Firefly, the first of which will produce both images and typeface effects.
“Generative AI is the next evolution of AI-driven creativity and productivity, transforming the conversation between creators and computers into something more natural, intuitive, and powerful,” David Wadhwani, president of Adobe’s digital media business, said in Tuesday’s announcement. With Firefly, the company says, productivity and creative confidence will increase for all creators, from high-end creative professionals to the long tail of the creator economy.
Using it, aspiring digital artists won’t be constrained by poor hand-eye coordination or a general lack of artistic talent; they’ll be able to speak realistic pictures into existence using just their words. Firefly is also multimodal, meaning that in addition to text-to-image conversion, it allows for the creation of audio, video, graphics, and 3D models.
According to the company, the first model in the Firefly family was trained on “hundreds of millions” of images from Adobe’s Stock photo library, openly licensed content, and material in the public domain, practically ensuring that the model won’t invite lawsuits like the one Getty Images brought against Stability AI over Stable Diffusion. The approach also enables stock photographers and artists to be paid for the use of their creations in the AI’s training.
The input screen, where users enter their text-based prompt, displays a carefully curated collection of generated works alongside the prompts that inspired them. These are meant to demonstrate the model’s generative potential and encourage users to push the limits of their computer-assisted creativity.
When the user enters their text prompt, the algorithm returns roughly a half-dozen initial image choices (in Adobe’s PR demo, an adult standing on a beach with a double exposure effect, built from photographs in Adobe’s Stock collection). From there, the user can choose from a variety of popular image styles and effects, make their own adjustments to the prompt, work with the AI, and generally muck around with the highly steerable process until it produces what they’re after. The resulting image quality was nearly photorealistic, though since none of the sample images featured hands, we couldn’t count fingers to check for accuracy.
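For developers wondering what that prompt-then-refine loop might look like programmatically, here is a minimal sketch. The endpoint URL, payload fields, and style preset below are illustrative assumptions, not Adobe’s actual Firefly API, which had not been publicly documented at the time of the announcement.

```python
# Hypothetical sketch of the prompt-then-refine loop described above.
# The endpoint, payload fields, and style names are assumptions for
# illustration, not Adobe's actual Firefly API.
import requests

API_URL = "https://api.example.com/v1/text-to-image"  # placeholder endpoint


def generate(prompt: str, style: str | None = None, n: int = 6) -> list[str]:
    """Request n candidate images for a prompt, optionally with a style preset."""
    payload = {"prompt": prompt, "num_images": n}
    if style:
        payload["style"] = style
    resp = requests.post(API_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return [img["url"] for img in resp.json()["images"]]


# First pass: a handful of candidates from the raw prompt.
candidates = generate("an adult standing on a beach, double exposure effect")

# Refinement: tweak the wording and apply a style preset, repeating until
# the output matches what you're after.
refined = generate(
    "an adult standing on a beach at golden hour, double exposure effect",
    style="photo",
)
```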
Adobe’s licensed Stock library will serve as the training image database at first, but the company is also investigating letting individual users include their own portfolios. This should make it possible for photographers who already have a distinct aesthetic to replicate it within the model, so that the imagery it produces blends in with their existing work. The company did not specify when that would happen.
A similar capability lets the model customize font effects and produce wireframe logos from scanned sketches and doodles. It’s all really cool, but misused, it could force a shockingly large number of digital artists out of employment. That is what Adobe’s Content Authenticity Initiative (CAI) aims to avoid.
The CAI represents Adobe’s attempt to provide some sort of guidelines in this brand-new, Wild West sector of Silicon Valley. Its proposed industry operating standards would establish and regulate ethical conduct and openness in the AI training process. For instance, the CAI would develop a “do not train” tag that functions in a similar manner to robots.txt. The persistent identifier would stay on the artwork as it circulated online, letting everyone who came across it know that it was created by a machine. According to the press release, the initiative has received support from over 900 organizations globally, “including media and tech companies, NGOs, academics, and others.”
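To make the robots.txt analogy concrete, here is a minimal sketch of how such a “do not train” signal could be attached to an asset as a sidecar manifest. The field names and values are assumptions for illustration, not the CAI’s actual specification.

```python
# Minimal sketch of a robots.txt-style "do not train" signal attached to an
# image as a sidecar manifest. Field names and values are illustrative
# assumptions, not the CAI's actual specification.
import json
from pathlib import Path


def write_manifest(image_path: str, ai_generated: bool, allow_training: bool) -> Path:
    """Write a sidecar JSON manifest next to the image and return its path."""
    manifest = {
        "asset": Path(image_path).name,
        "ai_generated": ai_generated,  # disclose machine authorship
        "training": "allowed" if allow_training else "notAllowed",  # opt-out flag
    }
    sidecar = Path(image_path).with_suffix(".manifest.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar


# Tag a generated image as machine-made and opted out of model training.
write_manifest("beach_double_exposure.png", ai_generated=True, allow_training=False)
```

A sidecar file keeps the sketch simple; the persistence the CAI describes would require embedding such metadata in the file itself, so the signal travels with the artwork as it circulates online.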