
Training Course: The Best Neural Networks (AI) for Improving, Creating, and Editing Images and Videos [from Scratch to Professional Level] from CGBandit

The course description was updated on October 30 to reflect additional materials added to the program.

The online training course is suitable for a wide range of users, although it will be especially valuable for CG artists, designers, architects, social media managers, bloggers, photographers, 2D and 3D graphic artists, visual content creators, etc. The course curriculum is designed for both absolute beginners starting from scratch and users who already have experience working with neural networks.

Hello, dear friends and students! I’m Valentin Kuznetsov. I am the head of the CGBandit educational project, as well as the creator and owner of Aladamus.com, an online service with AI tools.

I am glad to present the training program of our new course, "The Best Neural Networks (AI) for Improving, Creating, and Editing Images and Videos." This course emerged as a separate, full-fledged product from the best knowledge selected by the AI experts on my team, who lead the development of the Aladamus.com platform.
After completing the training, you will gain the knowledge and skills to use the most powerful free and paid neural networks for professional image enhancement, creation, and editing. Neural networks covered in the course:

  • Free:
    Installed on your computer and run on your graphics card; usage is unlimited and works even without an internet connection:
    - Stable Diffusion (SDXL, FLUX, and other models) in the ComfyUI interface.
  • Paid and partially paid:
    Provided as web versions on a subscription basis:
    - Nano Banana (Gemini 2.5 Flash Image)
    - and other neural networks that you will learn about in the course

Thanks to the knowledge provided in the course, you will be able to achieve an impressive level of neural-network graphics.

This page is a description of the course. We recommend opening it on a computer monitor so that you can see the "before and after" image examples in full detail; on a phone, many important details of the images will be lost on the small screen.

Before going into detail about the skills and knowledge you will acquire in the course, let me tell you about the background to its creation. A year ago, in September 2024, I assigned a new task to my team of talented programmers, 3D artists, researchers, and others involved in developing our marketplaces CGBandit.com and Bendtrade.com, as well as the educational project CGBanditcourse.com. We began daily work on an online service with tools based on artificial intelligence, and researchers and developers from the field of AI technologies also joined the team. The main goal I set for myself and the team was to surpass the most popular web services offering AI tools that simultaneously:

  • Increase image resolution (AI Upscale).
  • Improve the quality of image detail rendering (Image Enhancer).
  • Increase the visual impact and appeal of the image (Image Enhancer).
  • Restore poor-quality or damaged images and transform the original image into an improved version (Image Enhancer).

After a year of daily hard work, we succeeded: we created a platform with AI tools for image enhancement and transformation (AI Upscale) that shows impressive results, and we named it Aladamus.com.

My 20 years of experience in 3D/2D graphics, my original developments, ideas, and personal experience as an inventor, combined with the collective intelligence of the entire team, have yielded results. Before building the workflows used on the Aladamus.com platform, we reviewed and studied virtually the entire range of available free and paid developments, AI tools, and other functionality from enthusiasts, researchers, and companies, stored in repositories on GitHub and other sources.

With a vast range of knowledge and practical experience in using the best and most effective workflows for free neural networks, in separating useless junk from truly powerful combinations, and in knowing which nodes, models, and modules give the best results for specific tasks, we decided to delight our roughly 4,000 students (who have already completed CGBandit courses in interior design, 3D visualization, and 3D modeling) with the release of a course on neural networks.

The knowledge and technologies provided in the course will be valuable not only for representatives of the professions listed above but also for the widest possible audience. We decided to share the most valuable knowledge about neural networks for free image enhancement, creation, and editing. This is the result of our team's selection of the best from a huge array of information and tools, which will save you a great deal of time and effort searching for the right material amid the information clutter on the Internet.

An example of the level of neural-network graphics that you will have at your disposal after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)

After completing the course, you will have skills, knowledge, and AI tools in the following seven directions.

DIRECTION 1

[AI Upscale] Free detail improvement and resolution increase for images: photos, generations, 3D renderings, and illustrations, using Stable Diffusion with SDXL, FLUX, and other models

In this course, you will receive a free AI Upscale. You no longer need to pay $40 a month for online neural network services: improve any images on your own computer and graphics card as much as you want, as often as you want. The AI Upscale tool increases image resolution while improving detail, image quality, light and shadow, texture quality, sharpness, and clarity, refining every pixel and increasing the visual appeal of the image.
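
For readers curious about what such an upscaling pass looks like under the hood, here is a minimal Python sketch using the Hugging Face diffusers library rather than the ComfyUI workflows taught in the course; the model name, the 2x resize factor, and the denoising strength are illustrative assumptions, not the course's exact settings. The idea is simple: enlarge the image conventionally, then let a low-strength img2img pass redraw fine detail at the new resolution.

# Minimal "upscale + re-detail" sketch with diffusers (illustrative, not the course workflow).
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

src = Image.open("render.png").convert("RGB")
# Step 1: plain 2x resize; the AI pass below adds the actual detail.
# (Very large images need a tiled approach; kept simple here.)
up = src.resize((src.width * 2, src.height * 2), Image.Resampling.LANCZOS)

# Step 2: img2img with low strength keeps the composition but refines every pixel.
result = pipe(
    prompt="photorealistic interior, sharp detail, natural lighting",
    image=up,
    strength=0.3,          # low strength = preserve content, add detail
    guidance_scale=5.0,
    num_inference_steps=30,
).images[0]
result.save("render_upscaled.png")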

EXAMPLES
An example of the level of neural-network graphics you will be able to create after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)

Example of improved generated images

Resolution before: 896x1347px
Resolution after: 3576x5376px
Generation by Valentin Kuznetsov

An example of an improved 3D rendering from our student's graduation project

This is a tasty morsel for 3D and CG artists: improve your 3D visualizations and increase image resolution. Where the render engine falls short in visual impact and photorealism, the neural network will fill in the gaps, add detail, improve the light and shadow, sculpt the volume, and handle the objects that are usually resource-intensive to perfect with pure 3D graphics: fuzzy carpets, plaids, people, plants, water, natural landscapes, and much more. Thanks to the workflows and access to neural networks that you will receive, you can significantly increase the photorealism of your 3D visualizations by improving and detailing your renders.

Resolution before: 1440x1800px
Resolution after: 2880x3600px
DIRECTION 2

Free enhancement of 3D models of people in your 3D visualizations using Stable Diffusion with SDXL, FLUX, and other models

Enhance 3D models of people to maximum photorealism in your 3D renders. Where the render engine falls short, AI technologies come to the rescue.

EXAMPLES
DIRECTION 3

Free image generation based on your sketches, drawings, photographs, 3D renders, and any other images, using a control map and, if necessary, a reference, with Stable Diffusion on SDXL, FLUX, and other models
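
To give a rough idea of what generation with a control map involves, the sketch below uses the diffusers library with a Canny-edge ControlNet for SDXL; the checkpoints, thresholds, and conditioning scale are assumptions chosen for the example and are not the course's ComfyUI setup.

# Sketch: generate a new image whose geometry follows a control map (Canny edges)
# extracted from a sketch or 3D render. Illustrative only; the course works in ComfyUI.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16  # assumed checkpoint
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the control map: Canny edges of the source sketch / render.
src = np.array(Image.open("sketch.png").convert("RGB"))
edges = cv2.Canny(src, 100, 200)
control_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    prompt="modern living room, photorealistic, soft daylight",
    image=control_map,
    controlnet_conditioning_scale=0.7,  # how strictly the geometry is followed
    num_inference_steps=30,
).images[0]
image.save("generated_from_sketch.png")

Lowering controlnet_conditioning_scale loosens how strictly the generated image follows the geometry of your sketch.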

EXAMPLES
[Before/after examples with a form donor, control map, and reference image]

Generation from a prompt, without a reference image

[Before/after examples with a form donor, control map, and reference image]

Generation from a prompt, without a reference image

DIRECTION 4

INPAINT (insertion / inpainting) is a function for local, mask-based image editing: adding objects to an image from a prompt or a reference.

EXAMPLES OF INSERTION USING NANO BANANA (GEMINI 2.5 FLASH IMAGE)
Initial generation / The embedded image:
Francois-Xavier Lalanne
Vitra - Akari 25N Floor Lamp
Prompt-based generation, without a reference image
Adding new elements to an image based on a reference or a clean product image
Adding new elements (cacti and chair replacement)
[Before/after image comparisons]
EXAMPLES OF INSERTION USING STABLE DIFFUSION

Thanks to this course you will master the technique of inpainting, which allows you to insert new elements into any part of an image (photo, 3D render, illustration, sketch).

Add anything you want: furniture, interior items, buildings, people, animals, accessories, or environmental details.

Work with control maps — set the general shape and geometry of the object.

Natural integration — the neural network “fits” new elements into the composition as if they had always been part of the image: it takes into account light, shadows, perspective, and atmosphere.

Unlimited creativity — change the interior of a room, expand the architectural landscape, bring photos to life, or add realistic details to your 3D renderings.

The result is a tool that allows you to not only edit, but also creatively and flexibly manipulate reality in an image.
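
For a feel of what mask-based inpainting looks like in code, here is a minimal diffusers sketch; the inpainting checkpoint, file names, and parameters are illustrative assumptions and not the course's ComfyUI workflow. The mask decides everything: white pixels are repainted according to the prompt, black pixels are left untouched.

# Sketch: mask-based inpainting - repaint only the white area of the mask,
# leaving the rest of the image intact. Illustrative, not the course workflow.
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # assumed inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("room.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = area to repaint, black = keep

result = pipe(
    prompt="a green armchair with a woolen plaid",
    image=image,
    mask_image=mask,
    strength=0.95,          # how strongly the masked area is redrawn
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]
result.save("room_inpainted.png")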


As a bonus, the educational program will include additional master classes on Photoshop.

DIRECTION 5

INPAINT: mask-based insertion using a control map, based on your sketches, drawings, images, 3D renders, etc., using free Stable Diffusion with SDXL, FLUX, and other models

EXAMPLES
DIRECTION 6

Creating images from a text prompt using free Stable Diffusion with SDXL and FLUX models, as well as other paid neural networks

EXAMPLES
An example of the level of neural-network graphics that you will have at your disposal after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)

Create images from text prompts: the neural network converts your text description into a complete visual image. You formulate your idea in words, specifying details, style, atmosphere, color scheme, and even emotional tone, and the neural network turns those words into a unique visual result.

This approach has a number of key advantages:

  • Control over content: you specify what you want to see in the image, including details of objects, background, composition, and style.
  • Rapid prototyping and experimentation: you can create multiple versions of a single idea and test different styles and effects without spending time on manual drawing.
  • Creating unique content: each image is generated individually by a neural network, making it exclusive and protected in terms of copyright. You become the author of original content and eliminate the risk of copyright infringement.

Using text prompts turns image generation into an interactive creative process, where your words become a tool and the neural network becomes your “virtual artist.”
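
As a minimal illustration of how such prompt-driven generation is wired up in code (our own sketch with the diffusers library, not the course's ComfyUI workflow; the model and the prompts are assumptions), a text-to-image call boils down to a prompt, a negative prompt, and a few sampling parameters:

# Sketch: text-to-image generation from a prompt with SDXL. Illustrative only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed model
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt=(
        "cozy scandinavian kitchen at golden hour, warm light, "
        "wooden textures, photorealistic, high detail"
    ),
    negative_prompt="blurry, low quality, distorted geometry",
    num_inference_steps=30,
    guidance_scale=6.0,
    width=1024,
    height=1024,
).images[0]
image.save("kitchen.png")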

An example of the level of neural-network graphics you will be able to create after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)

Social media – create stunning photos, vivid illustrations, creative posts, visuals for stories and covers that will grab your audience's attention.

Website and blog design – fill your pages with stylish banners, background images, and unique graphic elements.

Menus and promotional materials – generate attractive photos of dishes, promotional posters or brochures that will make your business stand out from the competition.

Covers and presentations – create memorable images for books, podcasts, albums, slides, or commercial proposals.

Versatile solutions – from profile images and logos to full-fledged art illustrations for printing.

An example of the level of neural-network graphics you will be able to create after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)
DIRECTION 7

Free image generation based on a REFERENCE (an example image) using Stable Diffusion with SDXL, FLUX, and other models

Generate images based on a reference (a sample image), steering the new image to resemble the reference in style, color, and so on. With the right workflows, templates, and knowledge, you can achieve a high level of graphic quality.
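
One common way to condition generation on a reference image is an IP-Adapter; the sketch below shows this with a recent version of the diffusers library, with the adapter repository, weight file, and scale as illustrative assumptions rather than the course's exact workflow.

# Sketch: reference-guided generation with an IP-Adapter - the new image borrows
# style and mood from a reference picture. Illustrative, not the course workflow.
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the IP-Adapter weights (assumed repository and file names).
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)  # 0 = ignore the reference, 1 = follow it closely

reference = Image.open("reference.png").convert("RGB")

image = pipe(
    prompt="spacious loft apartment, evening light",
    ip_adapter_image=reference,   # the style and mood donor
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("loft_in_reference_style.png")

The adapter scale controls how much the reference dominates: closer to 0 the text prompt leads, closer to 1 the reference's style takes over.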

EXAMPLES
Reference
Generated images based on the style of the reference image
An example of the level of graphics you will have after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)
Reference
Generated images based on the style of the reference image
An example of the level of graphics you will have after completing the course | Generation by Valentin Kuznetsov (@ValentinCGB)

The online course includes:

  • Access to the course's training videos on our online platform Nodsmap.com. You get 24/7 access to the videos, the training program structure, and related materials as long as you have an internet connection, and you can watch the video lessons whenever it is convenient for you. The training program is released gradually over 30 days from the moment access is granted on the Nodsmap.com platform.
    We do not intend to deny you access to the course after purchase, your ability to review the training material, or your access to both free and paid updates. However, for legal security reasons, in the event of extraordinary circumstances we have to play it safe and specify that access to the course is guaranteed for 2 years. We do not plan to disable your access to the training material two years after the date of purchase.
  • Access to both paid and free updates of the video tutorials within the course structure, as well as all important supporting materials: updated workflow templates for neural networks in the ComfyUI interface and important related knowledge; our new workflows for generating, editing, and improving images; and important professional updates on new models and nodes that keep you up to date with advances in AI technologies. We constantly monitor new releases and provide you with the latest and best workflows; given the rapid pace of AI development, we cannot afford to fall behind. Through the platform on which the training material is provided, you will be connected to a truly significant channel of relevant technological updates.
  • An activation key for video lessons that allows you to link the course to one device.
  • Additional bonuses. The knowledge and free AI tools in the seven directions provided in the course will be more than enough for most people's needs. But for true AI enthusiasts, fans of AI technologies, and computer graphics perfectionists, super-exclusive privileges will be available in our online service with AI tools at Aladamus.com. We really wanted to launch it in the fourth quarter of 2025 and are making every effort to do so through continued daily teamwork; however, for legal security reasons, we cannot promise an exact launch date for the Aladamus.com platform at this time. When it launches, you will receive additional privileges: a 25% discount on the AI Upscale price on Aladamus.com within a year of the launch date.
  • Bonus Photoshop tutorials.

Cost of the training course

Price: 571 USD; with the current 31% discount, 394 USD.

To purchase the course or ask questions, please send a direct message to the author and director of the CGBandit educational project, Valentin Kuznetsov [contacts].

Curatorial support

Separately from course access, you can purchase our curatorial support, either immediately or some time after your purchase.
The price of curatorial support is $185 for two months; if you need more attention and assistance, you can always extend it. Curatorial support includes:

  • Feedback in online support chats.
  • If necessary, we will call you and connect to your computer to resolve technical issues with the neural network.

System requirements

— For working with paid neural networks such as Nano Banana (Gemini 2.5 Flash Image), any computer is suitable: generation takes place on the companies' servers, not on your device.

— For running Stable Diffusion SDXL and FLUX models at home, it is best to use:

  • An NVIDIA RTX 20-series graphics card or newer, with at least 6 GB of video memory;
  • At least 16 GB of RAM;
  • At least 50 GB of free disk space (a quick self-check sketch follows below).
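
If you would like a quick programmatic check of your machine against these figures, the short Python sketch below (our illustration, not part of the course materials; it assumes PyTorch and psutil are installed) prints the GPU name, video memory, RAM, and free disk space.

# Quick self-check of the hardware requirements listed above (illustrative helper).
import shutil
import psutil
import torch

gib = 1024 ** 3

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / gib:.1f} GiB "
          f"(recommended: 6 GiB or more)")
else:
    print("No CUDA-capable NVIDIA GPU detected.")

print(f"RAM: {psutil.virtual_memory().total / gib:.1f} GiB (recommended: 16 GiB or more)")
print(f"Free disk space: {shutil.disk_usage('/').free / gib:.1f} GiB "
      f"(recommended: 50 GiB or more)")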

If you are unsure whether your computer can run the neural network with the SDXL and FLUX models, please contact Valentin Kuznetsov [contact details], author and leader of the CGBandit educational project, for assistance.

If you have a weak graphics card and no plans to upgrade it, the course will show you how easy it is to rent a remote computer with a good graphics card.