Adobe Max 2024: All the major announcements around design and AI

Here’s everything announced at this year’s Adobe Max conference, from a new Firefly AI video model to fresh features and updates for Creative Cloud apps.


Adobe usually unveils plenty of big product launches and updates at its annual MAX event, and this year’s design conference is no exception. The creative software giant has announced its first generative AI video model, which is already available in Premiere Pro — beating rival offerings like OpenAI’s Sora and Google’s Veo to the market.

Creative Cloud apps like Photoshop, Illustrator, and InDesign are getting some new features, and the Frame.io cloud collaboration platform has been overhauled with its biggest update since it was launched in 2015, according to Adobe.

We’re collecting all the biggest announcements below for you to follow along.

Oct 17

Jess Weatherbed

Watch Adobe rotate a flat drawing of a bread basket as if it were a 3D object

Project Turntable for Adobe Illustrator is capable of rotating vector images without warping the design elements.
Image: Adobe

Adobe has been working on some experimental tech that could help speed up concept and planning work for graphic designers and audio engineers. Some of the “sneaks” previewed during Adobe’s MAX event include tools that can turn sketches into a variety of polished designs, and a feature for rotating 2D art as if it were a 3D object.

“Project Turntable” is capable of the latter. The tool allows users to click a button and then drag a slider to automatically view and snap a vector image into a different viewing perspective — something that would typically require an artist to redraw the image entirely from scratch. The examples demonstrated at the event retained their original designs when rotated without warping into a new overall shape. In one demo, a dragon’s yellow underbelly and tail remained in the same position throughout all the changes.

Read Article >

Oct 17

Jess Weatherbed

What is a photo? (mashup edition).

Adobe snuck another experimental tool demo into its MAX event that can blend multiple photographs together by just clicking a button.

The results look a little artificial, but it can take a while for “Sneaks” projects to actually appear in Adobe apps like Photoshop — so the capabilities may improve when (or if) it does get released to the public.


Oct 16

Jess Weatherbed

Adobe’s experimental tool can identify an artist’s work online or on a tote bag

Project Know How builds on Adobe’s work with Content Credentials.
Illustration by Alex Castro / The Verge

One of Adobe’s most notable experiments this year could help combat misinformation and ensure artists are credited for their work, no matter where it appears online or offline. Announced during the Sneaks segment at Adobe Max, Project “Know How” is an in-development tool that can link ownership of an image or video across any online platform, and a multitude of real-world surfaces like posters, tumblers, and textiles.

Project Know How builds on Adobe’s Content Credentials tech, which applies a digital tag to track where a piece of content has been posted, who owns it, and if/how it’s been manipulated. Provided an image or video has Content Credentials applied, the tool can help creators establish ownership over their content even if that authentication metadata has been stripped out. The demo I saw, while early in development, managed to display the Content Credentials data on an image just by recording it on a camera, even on a texture-heavy object like a tote bag.

Read Article >

Oct 15

Jess Weatherbed

Adobe teases AI tools that build 3D scenes, animate text, and make distractions disappear

Project “Clean Machine” easily removes distracting flashes and corrects overexposed footage.
Image: Adobe

Adobe is previewing some experimental AI tools for animation, image generation, and cleaning up video and photographs that could eventually be added to its Creative Cloud apps.

While the tools apply to vastly different mediums, all three have a similar aim — to automate most of the boring, complex tasks required for content creation, and give creatives more control over the results than simply plugging a prompt into an AI generator. The idea is to enable people to create animations and images, or make complex video edits, without requiring a great deal of time or experience.

Read Article >

Oct 15

Jess Weatherbed

Adobe looks to a new era for generative AI.

After joking about “AI” being a drinking game trigger at MAX, Adobe’s chief product officer Scott Belsky said the company is moving away from the “prompt era” of the tech — which “cheapened and undermined the craft of creative professionals” by generating anything from text descriptions.

Instead, the new “control era” aims to improve creative workflows with AI in more specific ways within Creative Cloud apps.


Belsky says AI tools should help to remove frustrating labor-intensive tasks around content creation, and not produce a sloppy final product.
Image: The Verge / Jess Weatherbed

Oct 14

Adi Robertson

Generative AI is coming for the Barbie collectors.

If you like showing off dolls in their original packaging, you might soon be showcasing work from Adobe’s Firefly AI tools. Mattel and Adobe claim AI-generated (and human-refined) backdrops have “greatly shortened the time it takes to get toys into stores” by cutting out parts of the design process, and the results will hit stores soon.


Do you really need AI to make a picture of a basketball court?
Image: Mattel

Oct 14

Jess Weatherbed

You can now use Lightroom mobile like Google’s Magic Editor.

Adobe has added a bunch of new AI “quick actions” that automatically apply effects for retouching backgrounds, teeth, eyes, skin, and more.

Lightroom’s mobile apps also now have the “Generative Remove” feature that was introduced to the desktop editor in May — making it easier to delete annoying objects from your images on the go.


It's kinda like a semi-professional version of the popular Facetune app.
Image: Adobe

Oct 14

Jess Weatherbed

Adobe’s next AI image generator update will make editing easier.

Teased during the demo for Project Concept, Adobe says V4 of its Firefly Image Model will allow users to highlight areas of a generated image to adjust without regenerating it from scratch — for example, adding a guitar to a specific surface.

V3 has only just rolled out to Creative Cloud apps, but this latest update will be available soon, according to Adobe.


Firefly Image V4 is already powering generative AI features in Adobe’s upcoming Project Concept planning tool.
Image: The Verge / Jess Weatherbed

Oct 14

Jess Weatherbed

This Adobe project resembles one of Figma’s best features.

Dubbed “Project Concept,” this in-development planning app allows multiple creatives to hash out ideas in real time by mind-mapping inspirational images — just like Figma’s mood board tools.

Project Concept also includes a built-in generative AI “remix” feature that blends together aspects from multiple reference images. It’s not available yet, but Adobe says we’ll know more “in the near future.”



Judging by the live demo presented at Adobe Max, the mood board canvas in Project Concept can be MASSIVE.
Image: The Verge / Jess Weatherbed

Oct 14

Jess Weatherbed

Frame.io’s camera-to-cloud integration is mind-blowingly fast.

Some audience pictures snapped by Adobe Principal Director Terry White at today’s Max event started appearing in Frame.io in real time as he was taking them, without needing to connect the camera to a computer.

And because his account was synced with Lightroom, they appeared there too — meaning there’s basically no delay for photographers to get their snaps ready for editing.


The pictures were appearing on the big screen within 1-2 seconds.
Image: The Verge / Jess Weatherbed

Oct 14

Jess Weatherbed

‘BOOM, WHATUP HOMIES’

Adobe design evangelist Michael Fugoso was so excited to demo Project Neo — an Illustrator-like app for 3D design that was teased last year — that it felt like Bill and Ted had taken to the stage.

Project Neo is available as a free beta right now but we’ll hear more about general availability in the coming months.


‘We’re in a 3D space homies! Sick!’
Image: The Verge / Jess Weatherbed

Oct 14

Jess Weatherbed

Photoshop is getting a bunch of new AI tools

The updated Remove Tool in Photoshop can now find and remove common distractions for you.
Image: Adobe

Adobe is kicking off its annual Adobe Max conference today with the launch of new AI-powered features across its Creative Cloud apps. New AI features for Photoshop, like automatic background distraction removal and a more powerful Firefly generative AI model, are the biggest announcements, with Illustrator, InDesign, and Premiere Pro also getting new features that can help to speed up traditionally labor-intensive design tasks.

For example, a new “Distraction Removal” feature has been added to the Remove Tool. Remove already works a bit like Google’s Magic Eraser feature on Pixel phones, allowing users to quickly remove unwanted objects from their images by brushing over them. The new Distraction Removal feature, which Adobe teased last year, makes it even more like Magic Eraser by automatically identifying common distractions for you, like people, wires, and cables, and removing them with a single click.

Read Article >

Oct 14

Jess Weatherbed

Frame.io’s massive productivity update is now available for everyone

Frame.io V4 aims to make collaborating on large projects much easier by removing the need to jump between different apps.
Image: Adobe

The latest version of Frame.io, Adobe’s review and collaboration platform for video and photography, is rolling out today, making it easier to manage sprawling creative projects in a single app. Available for all users on web, iPhone, and iPad, Frame.io V4 is the biggest update to the platform since it launched in 2015, according to Adobe, and adds new tagging and collaboration features that make it feel more like a workflow management tool such as Trello or Asana.

It includes the “metadata” tagging model that was introduced in beta earlier this year, which allows users to assign custom tags like media type, assignee, due date, social media platform, and more to their files, making them easier to manage and review. Projects can also be broken down into new “Collections” folders that automatically update to reflect any changes or comments made to work, creating a smoother collaboration process for teams or multiple users.

Read Article >

Oct 14

Jess Weatherbed

Adobe’s AI video model is here, and it’s already inside Premiere Pro

Adobe’s Firefly Video Model can generate a range of styles, including ‘realism’ (as pictured).
Image: Adobe

Adobe is making the jump into generative AI video. The company’s Firefly Video Model, which has been teased since earlier this year, is launching today across a handful of new tools, including some right inside Premiere Pro that will allow creatives to extend footage and generate video from still images and text prompts.

The first tool — Generative Extend — is launching in beta for Premiere Pro. It can be used to extend the end or beginning of footage that’s slightly too short, or make adjustments mid-shot, such as to correct shifting eye-lines or unexpected movement.

Read Article >

Copyright © 2024. Vision Network Television Limited. All Rights Reserved.