# Modellix

## Docs

- [AI Try-On](https://docs.modellix.ai/alibaba/ai-try-on.md): A virtual try-on image generation model that generates try-on images based on portrait photos and clothing images.
- [AI Try-On Parsing V1](https://docs.modellix.ai/alibaba/ai-try-on-parsing-v1.md): The aitryon-parsing-v1 supports segmentation of model images and clothing images, and can be used for pre-processing and post-processing of AI fitting room images.
- [AI Try-On Plus](https://docs.modellix.ai/alibaba/ai-try-on-plus.md): Compared to aitryon, it improves image clarity, clothing texture details, and logo restoration, but takes longer to generate, making it suitable for scenarios where timeliness is not a high priority.
- [AI Try-On Refiner](https://docs.modellix.ai/alibaba/ai-try-on-refiner.md): Performs secondary generation on the effect images created by AI virtual try-on, outputting finely polished virtual try-on effect images with higher fidelity.
- [Image Outpainting](https://docs.modellix.ai/alibaba/image-outpainting.md): The image-out-painting model allows free image extension, supporting image rotation and expansion through both expansion-coefficient and pixel-count methods. Users can control image expansion by specifying width and height ratios, or pixel values for left, right, top, and bottom extensions. Suitable for creative entertainment, assisted drawing, graphic design, and post-production in film and television.
- [Qwen Image](https://docs.modellix.ai/alibaba/qwen-image.md): The qwen-image excels in text rendering, particularly for Chinese text. Currently, `qwen-image-plus` and `qwen-image` have the same capabilities, but `qwen-image-plus` is more cost-effective.
- [Qwen Image Edit](https://docs.modellix.ai/alibaba/qwen-image-edit.md): The qwen-image-edit supports precise bilingual Chinese-English text editing, color adjustment, detail enhancement, style transfer, object addition and removal, and other operations, enabling complex image and text editing.
- [Qwen Image Edit Plus](https://docs.modellix.ai/alibaba/qwen-image-edit-plus.md): The qwen-image-edit-plus supports precise bilingual Chinese-English text editing, color adjustment, detail enhancement, style transfer, object addition and removal, and other operations, enabling complex image and text editing.
- [Qwen Image Edit Plus 2025-10-30](https://docs.modellix.ai/alibaba/qwen-image-edit-plus-2025-10-30.md): The qwen-image-edit-plus supports precise bilingual Chinese-English text editing, color adjustment, detail enhancement, style transfer, object addition and removal, and other operations, enabling complex image and text editing.
- [Qwen Image Edit Plus 2025-12-15](https://docs.modellix.ai/alibaba/qwen-image-edit-plus-2025-12-15.md): The qwen-image-edit-plus supports precise bilingual Chinese-English text editing, color adjustment, detail enhancement, style transfer, object addition and removal, and other operations, enabling complex image and text editing.
- [Qwen Image Plus](https://docs.modellix.ai/alibaba/qwen-image-plus.md): The qwen-image excels in text rendering, particularly for Chinese text. Currently, `qwen-image-plus` and `qwen-image` have the same capabilities, but `qwen-image-plus` is more cost-effective.
- [Qwen MT Image](https://docs.modellix.ai/alibaba/qwen-mt-image.md): The qwen-image-translate supports translating text from images in 11 languages into Chinese or English, accurately preserving original layout and content information, and provides custom features such as terminology definitions, sensitive word filtering, and image subject detection.
- [Wan 2.2 I2V Flash](https://docs.modellix.ai/alibaba/wan-2-2-i2v-flash.md): The Wan image-to-video model can generate videos using prompts and image references, presenting rich artistic styles and cinematic-quality visuals. Wan 2.2 Flash features ultimate generation speed, with more accurate instruction understanding and camera control, consistent visual elements, and comprehensively improved stability and success rates.
- [Wan 2.2 I2V Plus](https://docs.modellix.ai/alibaba/wan-2-2-i2v-plus.md): The Wan image-to-video model can generate videos using prompts and image references, presenting rich artistic styles and cinematic-quality visuals. Wan 2.2 Plus features more accurate instruction understanding, controllable camera movements, consistent visual elements, and comprehensively improved stability and success rates, delivering richer generated content.
- [Wan 2.2 KF2V Flash](https://docs.modellix.ai/alibaba/wan-2-2-kf2v-flash.md): The Wan first-and-last-frame video generation model: simply provide the first and last frame images, and it can generate a smooth, fluid dynamic video based on the prompt.
- [Wan 2.2 T2I Flash](https://docs.modellix.ai/alibaba/wan-2-2-t2i-flash.md): The Wan text-to-image model generates beautiful images from text. The wan2.2-t2i-flash has been comprehensively upgraded in creativity, stability, and writing texture.
- [Wan 2.2 T2I Plus](https://docs.modellix.ai/alibaba/wan-2-2-t2i-plus.md): The Wan text-to-image model generates beautiful images from text. The wan2.2-t2i-plus has been comprehensively upgraded in creativity, stability, and writing texture.
- [Wan 2.2 T2V Plus](https://docs.modellix.ai/alibaba/wan-2-2-t2v-plus.md): The Wan text-to-video model can generate videos from a single sentence, presenting rich artistic styles and cinematic quality. Wan 2.2 features more accurate instruction understanding, stable and smooth motion generation, and richer details.
- [Wan 2.5 I2I Preview](https://docs.modellix.ai/alibaba/wan-2-5-i2i-preview.md): The wan2.5-i2i-preview supports inputting text, a single image, or multiple images to achieve capabilities such as subject-consistent image editing and multi-image fusion creation.
- [Wan 2.5 T2I Preview](https://docs.modellix.ai/alibaba/wan-2-5-t2i-preview.md): The Wan text-to-image model generates beautiful images from text. The wan2.5-t2i-preview has removed the unilateral restriction, allowing free size selection within the total pixel area and aspect ratio constraints.
- [Wan 2.5 T2V Preview](https://docs.modellix.ai/alibaba/wan-2-5-t2v-preview.md): The Wan text-to-video model can generate videos from a single sentence, presenting rich artistic styles and cinematic quality. Wan 2.5 supports automatic dubbing and uploading custom audio files.
- [Wan 2.6 I2V](https://docs.modellix.ai/alibaba/wan-2-6-i2v.md): The Wan image-to-video model can generate videos using prompts and image references, featuring rich artistic styles and cinematic quality. Wan 2.6 introduces multi-shot narrative capabilities and supports both automatic dubbing and uploading custom audio files.
- [Wan 2.6 I2V Flash](https://docs.modellix.ai/alibaba/wan-2-6-i2v-flash.md): The Wan image-to-video model can generate videos using prompts and image references, featuring rich artistic styles and cinematic quality. Wan 2.6 introduces multi-shot narrative capabilities and supports both automatic dubbing and uploading custom audio files.
- [Wan 2.6 Image](https://docs.modellix.ai/alibaba/wan-2-6-image.md): The wan-2.6-image supports image editing and mixed text-image output, meeting diverse generation and integration needs.
- [Wan 2.6 T2I](https://docs.modellix.ai/alibaba/wan-2-6-t2i.md): The wan2.6-t2i supports the newly added synchronous interface, while allowing free selection of dimensions within the constraints of total pixel area and aspect ratio.
- [Wan 2.6 T2V](https://docs.modellix.ai/alibaba/wan-2-6-t2v.md): The Wan text-to-video model can generate videos from a single sentence, presenting rich artistic styles and cinematic quality. Wan 2.6 introduces multi-shot narrative capabilities and supports both automatic dubbing and uploading custom audio files.
- [Wanx 2.0 T2I Turbo](https://docs.modellix.ai/alibaba/wanx-2-0-t2i-turbo.md): The Wan text-to-image model generates beautiful images from text. The wanx2.0-t2i-turbo excels in textured portraits and creative design, offering great value for money.
- [Wanx 2.1 I2V Plus](https://docs.modellix.ai/alibaba/wanx-2-1-i2v-plus.md): The Wan image-to-video model can generate videos using prompts and image references, presenting rich artistic styles and cinematic-quality visuals. Wanx 2.1 Plus offers even more refined image quality.
- [Wanx 2.1 I2V Turbo](https://docs.modellix.ai/alibaba/wanx-2-1-i2v-turbo.md): The Wan image-to-video model can generate videos using prompts and image references, featuring rich artistic styles and cinematic-quality visuals. Wanx 2.1 Turbo offers high cost-effectiveness.
- [Wanx 2.1 Image Edit](https://docs.modellix.ai/alibaba/wanx-2-1-image-edit.md): The wanx2.1-imageedit can achieve diverse image editing through simple instructions, suitable for scenarios such as image expansion, watermark removal, style transfer, image restoration, and image enhancement.
- [Wanx 2.1 KF2V Plus](https://docs.modellix.ai/alibaba/wanx-2-1-kf2v-plus.md): The Wan first-and-last-frame video generation model: simply provide the first and last frame images, and it can generate a smooth, fluid dynamic video based on the prompt.
- [Wanx 2.1 T2I Plus](https://docs.modellix.ai/alibaba/wanx-2-1-t2i-plus.md): The Wan text-to-image model generates beautiful images from text. The wanx2.1-t2i-plus supports multiple styles and generates images with rich details.
- [Wanx 2.1 T2I Turbo](https://docs.modellix.ai/alibaba/wanx-2-1-t2i-turbo.md): The Wan text-to-image model generates beautiful images from text. The wanx2.1-t2i-turbo supports multiple styles and generates quickly.
- [Wanx 2.1 T2V Plus](https://docs.modellix.ai/alibaba/wanx-2-1-t2v-plus.md): The Wan text-to-video model can generate videos from a single sentence, featuring rich artistic styles and cinematic quality. Wanx 2.1 Plus offers even more refined visuals.
- [Wanx 2.1 T2V Turbo](https://docs.modellix.ai/alibaba/wanx-2-1-t2v-turbo.md): The Wan text-to-video model can generate videos from a single sentence, featuring rich artistic styles and cinematic quality. Wanx 2.1 Turbo offers high cost-effectiveness.
- [Wanx 2.5 I2V Preview](https://docs.modellix.ai/alibaba/wanx-2-5-i2v-preview.md): The Wan image-to-video model can generate videos using prompts and image references, featuring rich artistic styles and cinematic quality. Wan 2.5 supports automatic dubbing and uploading custom audio files.
- [Wanx Background Generation V2](https://docs.modellix.ai/alibaba/wanx-background-generation-v2.md): The wanx-background-generation-v2 can expand and generate background information based on input foreground image materials, achieving natural light-and-shadow fusion and delicate, realistic image generation. It supports guidance methods such as text descriptions and reference images, and can also intelligently add text content to generated images.
- [Wanx Sketch to Image Lite](https://docs.modellix.ai/alibaba/wanx-sketch-to-image-lite.md): Generates exquisite doodle artworks based on input hand-drawn sketches and text descriptions.
- [Wanx Style Repaint V1](https://docs.modellix.ai/alibaba/wanx-style-repaint-v1.md): The wanx-style-repaint-v1 can perform various stylized redraws on input portrait images, allowing the newly generated images to maintain the original facial features while presenting different artistic painting effects.
- [WordArt Semantic](https://docs.modellix.ai/alibaba/wordart-semantic.md): The wordart-semantic can creatively deform the edge contours of input text based on prompt content, enabling more creative uses of a font, and returns a black-background white mask image containing the text.
- [Wordart Texture](https://docs.modellix.ai/alibaba/wordart-texture.md): The wordart-texture can perform creative design on input text content or text images, adding materials and textures to the text based on prompt content to achieve effects such as 3D prominence or scene integration. It generates exquisite, stylistically diverse artistic text that can be used directly as a text poster when combined with a background.
- [Query Task Result](https://docs.modellix.ai/api-reference/query-task-result.md): Query the status and results of an async task by task_id.
- [Claude Code setup](https://docs.modellix.ai/archieved/ai-tools/claude-code.md): Configure Claude Code for your documentation workflow
- [Cursor setup](https://docs.modellix.ai/archieved/ai-tools/cursor.md): Configure Cursor for your documentation workflow
- [Windsurf setup](https://docs.modellix.ai/archieved/ai-tools/windsurf.md): Configure Windsurf for your documentation workflow
- [Authentication](https://docs.modellix.ai/archieved/authentication.md)
- [Development](https://docs.modellix.ai/archieved/development.md): Preview changes locally to update your docs
- [Code blocks](https://docs.modellix.ai/archieved/essentials/code.md): Display inline code and code blocks
- [Images and embeds](https://docs.modellix.ai/archieved/essentials/images.md): Add image, video, and other HTML elements
- [Markdown syntax](https://docs.modellix.ai/archieved/essentials/markdown.md): Text, title, and styling in standard markdown
- [Navigation](https://docs.modellix.ai/archieved/essentials/navigation.md): The navigation field in docs.json defines the pages that go in the navigation menu
- [Reusable snippets](https://docs.modellix.ai/archieved/essentials/reusable-snippets.md): Reusable, custom snippets to keep content in sync
- [Global Settings](https://docs.modellix.ai/archieved/essentials/settings.md): Mintlify gives you complete control over the look and feel of your documentation using the docs.json file
- [Introduction](https://docs.modellix.ai/archieved/introduction.md): Example section for showcasing API endpoints
- [API Overview](https://docs.modellix.ai/archieved/overview.md): Comprehensive overview of the Prediction API including endpoints, authentication, and response formats
- [Quickstart](https://docs.modellix.ai/archieved/quickstart.md): Start building awesome documentation in minutes
- [null](https://docs.modellix.ai/archieved/snippets/snippet-intro.md)
- [Seedance 1.0 Lite I2V](https://docs.modellix.ai/bytedance/seedance-1-0-lite-i2v.md): ByteDance's small-parameter version of the video generation model achieves excellent video generation quality while significantly increasing generation speed, balancing both effect and efficiency.
- [Seedance 1.0 Lite T2V](https://docs.modellix.ai/bytedance/seedance-1-0-lite-t2v.md): ByteDance's small-parameter version of the video generation model achieves excellent video generation quality while significantly increasing generation speed, balancing both effect and efficiency.
- [Seedance 1.0 Pro Fast I2V](https://docs.modellix.ai/bytedance/seedance-1-0-pro-fast-i2v.md): Seedance 1.0 pro fast, inheriting the core advantages of the Seedance 1.0 pro model, has a 3x faster generation speed and a 72% lower price. It is a video generation model that achieves an excellent balance among quality, speed, and cost.
- [Seedance 1.0 Pro Fast T2V](https://docs.modellix.ai/bytedance/seedance-1-0-pro-fast-t2v.md): Seedance 1.0 pro fast, inheriting the core advantages of the Seedance 1.0 pro model, has a 3x faster generation speed and a 72% lower price. It is a video generation model that achieves an excellent balance among quality, speed, and cost.
- [Seedance 1.0 Pro I2V](https://docs.modellix.ai/bytedance/seedance-1-0-pro-i2v.md): Seedance 1.0 is a video generation foundation model launched by ByteDance. As the large-parameter version of this model series, Seedance 1.0 Pro has unique multi-shot narrative capabilities and performs excellently across all dimensions. It has made breakthroughs in semantic understanding and instruction-following capabilities, and can generate 1080P high-definition videos that are smooth in motion, rich in details, diverse in style, and have cinematic-level aesthetics.
- [Seedance 1.0 Pro T2V](https://docs.modellix.ai/bytedance/seedance-1-0-pro-t2v.md): Seedance 1.0 is a video generation foundation model launched by ByteDance. As the large-parameter version of this model series, Seedance 1.0 Pro has unique multi-shot narrative capabilities and performs excellently across all dimensions. It has made breakthroughs in semantic understanding and instruction-following capabilities, and can generate 1080P high-definition videos that are smooth in motion, rich in details, diverse in style, and have cinematic-level aesthetics.
- [Seedance 1.5 Pro I2V](https://docs.modellix.ai/bytedance/seedance-1-5-pro-i2v.md): Seedance 1.5 pro is ByteDance's new professional-grade audio-visual co-generation model. It builds on multi-shot narrative and HD generation capabilities, supporting integrated audio and video output for a unified creation experience (visuals, human voice, music, and sound effects). The model includes a start/end frame feature, allowing creators to lock the video's style, composition, and characters by setting the first and last frames, which then drives the generation of smooth, dynamic video. This significantly enhances the efficiency, controllability, and artistic expressiveness of professional video creation.
- [Seedance 1.5 Pro T2V](https://docs.modellix.ai/bytedance/seedance-1-5-pro-t2v.md): Seedance 1.5 pro is ByteDance's new professional-grade audio-visual co-generation model. It builds on multi-shot narrative and HD generation capabilities, supporting integrated audio and video output for a unified creation experience (visuals, human voice, music, and sound effects). The model includes a start/end frame feature, allowing creators to lock the video's style, composition, and characters by setting the first and last frames, which then drives the generation of smooth, dynamic video. This significantly enhances the efficiency, controllability, and artistic expressiveness of professional video creation.
- [Seededit 3.0 I2I](https://docs.modellix.ai/bytedance/seededit-3-0-i2i.md): SeedEdit 3.0 is an image editing model that supports editing images via text instructions. SeedEdit 3.0 is trained based on the text-to-image model Seedream 3.0, integrated with diverse data fusion methods and specific reward models. Its ability to preserve image subjects, backgrounds, and details has been further improved, especially in scenarios such as portrait editing, background modification, and perspective and light conversion.
- [Seedream 3.0 T2I](https://docs.modellix.ai/bytedance/seedream-3-0-t2i.md): Seedream 3.0 is a Chinese-English bilingual image generation foundation model that supports native high resolution. Its overall capabilities are comparable to GPT-4o, ranking it among the world's top tier. It offers faster response speed; more accurate small-text generation and enhanced text typesetting; strong instruction-following ability; improved aesthetics and structure; and good fidelity and detail performance.
- [Seedream 4.0 I2I](https://docs.modellix.ai/bytedance/seedream-4-0-i2i.md): A SOTA-level multimodal image creation model based on a leading architecture. It breaks the creative boundaries of traditional text-to-image models and natively supports text, single-image, and multi-image inputs. Users can freely fuse text and images, and in the same model, realize diverse applications like multi-image fusion creation based on subject consistency, image editing, and group image generation, enabling more free and controllable image creation.
- [Seedream 4.0 T2I](https://docs.modellix.ai/bytedance/seedream-4-0-t2i.md): A SOTA-level multimodal image creation model based on a leading architecture. It breaks the creative boundaries of traditional text-to-image models and natively supports text, single-image, and multi-image inputs. Users can freely fuse text and images, and in the same model, realize diverse applications like multi-image fusion creation based on subject consistency, image editing, and group image generation, enabling more free and controllable image creation.
- [Seedream 4.5 I2I](https://docs.modellix.ai/bytedance/seedream-4-5-i2i.md): Seedream 4.5 is the latest in-house image generation model developed by ByteDance. Compared with Seedream 4.0, it delivers comprehensive improvements, especially in editing consistency, including better preservation of subject details, lighting, and color tone. It also enhances portrait refinement and small-text rendering. The model's multi-image composition capabilities have been significantly strengthened, and both reasoning performance and visual aesthetics continue to advance, enabling more accurate and artistically expressive image generation.
- [Seedream 4.5 T2I](https://docs.modellix.ai/bytedance/seedream-4-5-t2i.md): Seedream 4.5 is the latest in-house image generation model developed by ByteDance. Compared with Seedream 4.0, it delivers comprehensive improvements, especially in editing consistency, including better preservation of subject details, lighting, and color tone. It also enhances portrait refinement and small-text rendering. The model's multi-image composition capabilities have been significantly strengthened, and both reasoning performance and visual aesthetics continue to advance, enabling more accurate and artistically expressive image generation.
- [New Models](https://docs.modellix.ai/changelog/new-models.md): Model integration updates and announcements.
- [Product Updates](https://docs.modellix.ai/changelog/product-updates.md): Product updates and announcements.
- [Overview](https://docs.modellix.ai/get-started/index.md): Welcome to Modellix.
- [Pricing](https://docs.modellix.ai/get-started/pricing.md): The pricing of each model in [Modellix](https://modellix.ai).
- [Hailuo 02 FL2V](https://docs.modellix.ai/minimax/hailuo-02-fl2v.md): Hailuo 02's FL2V function provides unprecedented creative control by generating dynamic videos between a user-defined start and end frame. This feature not only masters extreme physics and complex transitions but also enables the novel capability to deduce a story leading up to a specified final image.
- [Hailuo 02 I2V](https://docs.modellix.ai/minimax/hailuo-02-i2v.md): Hailuo 02 masters both text-to-video and image-to-video generation with exceptional instruction following, while setting a new standard in visual realism through its extreme physics simulation.
- [Hailuo 02 T2V](https://docs.modellix.ai/minimax/hailuo-02-t2v.md): Hailuo 02 masters both text-to-video and image-to-video generation with exceptional instruction following, while setting a new standard in visual realism through its extreme physics simulation.
- [Hailuo 2.3 Fast I2V](https://docs.modellix.ai/minimax/hailuo-2-3-fast-i2v.md): Hailuo 2.3 Fast efficiently transforms images into dynamic videos with extreme physics mastery. It delivers exceptional value by generating high-quality, realistic motion at a reduced computational cost.
- [Hailuo 2.3 I2V](https://docs.modellix.ai/minimax/hailuo-2-3-i2v.md): Hailuo 2.3 not only generates high-quality videos from text or images with exceptional instruction following, but also redefines realism through its state-of-the-art mastery of extreme physics.
- [Hailuo 2.3 T2V](https://docs.modellix.ai/minimax/hailuo-2-3-t2v.md): Hailuo 2.3 not only generates high-quality videos from text or images with exceptional instruction following, but also redefines realism through its state-of-the-art mastery of extreme physics.
- [MiniMax I2V-01](https://docs.modellix.ai/minimax/minimax-i2v-01.md): MiniMax I2V-01 is a foundational image-to-video model that converts static pictures into high-quality video sequences, delivering smooth animation especially optimized for illustrations and anime styles.
- [MiniMax I2V-01-Director](https://docs.modellix.ai/minimax/minimax-i2v-01-director.md): I2V-01-Director is an image-to-video AI model that offers precise camera control, allowing users to create professional-looking video clips with cinematic movements through a variety of lens instructions.
- [MiniMax I2V-01-Live](https://docs.modellix.ai/minimax/minimax-i2v-01-live.md): I2V-01-Live is an image-to-video model specifically optimized for animating 2D illustrations and cartoon styles, enhancing smoothness and vivid motion to bring static art to life with fluid character movements and natural expressions.
- [MiniMax Image-01 I2I](https://docs.modellix.ai/minimax/minimax-image-01-i2i.md): MiniMax's multimodal vision model that blends text-to-image generation with visual reasoning for seamless cross-modal tasks.
- [MiniMax Image-01-Live I2I](https://docs.modellix.ai/minimax/minimax-image-01-live-i2i.md): MiniMax's multimodal vision model that blends text-to-image generation with visual reasoning for seamless cross-modal tasks.
- [MiniMax Image-01 T2I](https://docs.modellix.ai/minimax/minimax-image-01-t2i.md): MiniMax's multimodal vision model that blends text-to-image generation with visual reasoning for seamless cross-modal tasks.
- [MiniMax S2V-01](https://docs.modellix.ai/minimax/minimax-s2v-01.md): The MiniMax S2V-01 is a specialized subject-reference video model designed to solve the industry challenge of character consistency. It can generate dynamic videos where the main character's identity stays highly consistent across every frame, using just a single photo as a reference and at a computational cost significantly lower than traditional solutions.
- [MiniMax T2V-01](https://docs.modellix.ai/minimax/minimax-t2v-01.md): MiniMax T2V-01 is a text-to-video model that uniquely delivers professional-level camera movement control, transforming written prompts into cinematic video clips with dynamic shots.
- [MiniMax T2V-01-Director](https://docs.modellix.ai/minimax/minimax-t2v-01-director.md): T2V-01-Director is a text-to-video AI model that offers precise camera control, allowing users to create professional-looking video clips with cinematic movements through a variety of lens instructions.
- [Agent Skill](https://docs.modellix.ai/ways-to-use/agent-skill.md)
- [Error Handling](https://docs.modellix.ai/ways-to-use/error-handling.md): Learn about error codes, messages, and best practices for the Prediction API.
- [MCP](https://docs.modellix.ai/ways-to-use/mcp.md): The Modellix Docs MCP Server allows you to search the Modellix documentation in your MCP clients.
- [Steps](https://docs.modellix.ai/ways-to-use/steps.md): The steps to use the Modellix models API, including how to get an API key, how to use the API, and how to get the result.
## OpenAPI Specs

- [minimax-t2v](https://docs.modellix.ai/model-api/minimax/minimax-t2v.json)
- [minimax-t2i](https://docs.modellix.ai/model-api/minimax/minimax-t2i.json)
- [minimax-i2v](https://docs.modellix.ai/model-api/minimax/minimax-i2v.json)
- [minimax-i2i](https://docs.modellix.ai/model-api/minimax/minimax-i2i.json)
- [kling-t2v](https://docs.modellix.ai/model-api/kling/kling-t2v.json)
- [kling-t2i](https://docs.modellix.ai/model-api/kling/kling-t2i.json)
- [kling-i2v](https://docs.modellix.ai/model-api/kling/kling-i2v.json)
- [kling-i2i](https://docs.modellix.ai/model-api/kling/kling-i2i.json)
- [bytedance-t2v](https://docs.modellix.ai/model-api/bytedance/bytedance-t2v.json)
- [bytedance-t2i](https://docs.modellix.ai/model-api/bytedance/bytedance-t2i.json)
- [bytedance-i2v](https://docs.modellix.ai/model-api/bytedance/bytedance-i2v.json)
- [bytedance-i2i](https://docs.modellix.ai/model-api/bytedance/bytedance-i2i.json)
- [alibaba-t2v](https://docs.modellix.ai/model-api/alibaba/alibaba-t2v.json)
- [alibaba-t2i](https://docs.modellix.ai/model-api/alibaba/alibaba-t2i.json)
- [alibaba-i2v](https://docs.modellix.ai/model-api/alibaba/alibaba-i2v.json)
- [alibaba-i2i](https://docs.modellix.ai/model-api/alibaba/alibaba-i2i.json)
- [query-task-result](https://docs.modellix.ai/common-api/query-task-result.json)
- [openapi](https://docs.modellix.ai/api-reference/openapi.json)

## Optional

- [Support](mailto:support@modellix.ai)
- [Community](https://discord.gg/N2FbcB2cZT)