An Image filter in iOS Swift example using CoreML – Style Art

Style Art is a CoreML-based library for applying image filters in iOS. It processes images with a set of pre-trained machine learning models and converts them into artistic styles.

This tutorial will guide you step-by-step through the implementation of a customizable image filter, enabling you to apply stunning visual effects to your images with ease.

Learn the fundamental concepts of image filtering, explore the capabilities of CoreML, and leverage the flexibility of Swift programming language to develop a feature-rich filter in your iOS applications.

Enhance the visual appeal of your images by mastering image filtering techniques in iOS Swift with CoreML.




Requirements

  • iOS 11.0+
  • Xcode 9.0+, Swift 4+

Manual installation

Download the ‘StyleArt’ folder and drag it into your Xcode project.

Usage of image Filter

StyleArt.shared.process(image: imageView.image, style: .Candy) { (result) in
    // result is the stylized UIImage
    self.imageView.image = result
}
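For instance, you could generate a preview for several styles in a row. This is a sketch: `originalImage` and the `showPreview(_:for:)` helper are placeholders for your own image source and UI update code, and `process` is assumed to deliver its result asynchronously:

```swift
// Generate a stylized preview for a few of the available styles (sketch).
let styles: [ArtStyle] = [.Mosaic, .Candy, .Feathers]

for style in styles {
    StyleArt.shared.process(image: originalImage, style: style) { stylized in
        // stylized is the filtered image for this style.
        DispatchQueue.main.async {
            self.showPreview(stylized, for: style) // hypothetical UI helper
        }
    }
}
```

Updating the UI on the main queue matters here, since the completion handler may be called from a background thread.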

Different Styles of Filters

This example uses CoreML to apply a style transfer technique, commonly known as Style Art: the artistic style of one image is applied to another image, producing visually appealing, artistic results. The available styles are defined in the ArtStyle enum:

enum ArtStyle {
    case Mosaic
    case Scream
    case Muse
    case Udanie
    case Candy
    case Feathers
}
  1. Importing CoreML Model: First, you’ll need to import the CoreML model that performs the style transfer. This model is typically trained on a combination of content and style images to learn the artistic style. You can find pre-trained style transfer models available on platforms like Apple’s CoreML Model Zoo or train your own models using frameworks like TensorFlow or PyTorch.
  2. Preparing Input and Output: To apply the style transfer, you’ll need to prepare the input and output image buffers. The input image is typically the content image you want to stylize, and the output image will store the stylized result.
  3. Loading and Configuring the Model: Load the CoreML model into your application and configure it for style transfer. This usually involves setting up the input and output specifications of the model.
  4. Preprocessing and Postprocessing: Before passing the input image to the model, you need to preprocess it. This could involve resizing, normalizing pixel values, or converting the image to a format the model accepts. Similarly, after obtaining the stylized output from the model, you may need to postprocess it into a format suitable for display.
  5. Style Transfer: Pass the preprocessed input image to the model for style transfer. The Core ML model will apply the learned artistic style to the input image and generate the stylized output.
  6. Displaying the Result: Once you have the stylized output image, you can display it in your application’s user interface. You can render the output image in an image view or apply additional visual effects to enhance the artistic appearance.
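The steps above can be sketched with Vision and Core ML. `StyleTransferModel` is a placeholder name for whichever compiled style-transfer model you import; the rest uses standard Vision APIs (`VNCoreMLModel`, `VNCoreMLRequest`, `VNImageRequestHandler`):

```swift
import UIKit
import CoreML
import Vision

// Sketch of steps 1–6, assuming a compiled Core ML style-transfer model
// named "StyleTransferModel" (placeholder) that takes an image input
// and produces a stylized image output.
func stylize(_ input: UIImage, completion: @escaping (UIImage?) -> Void) {
    // Steps 1 & 3: import the model and configure it for Vision.
    guard let cgImage = input.cgImage,
          let mlModel = try? StyleTransferModel(configuration: MLModelConfiguration()).model,
          let vnModel = try? VNCoreMLModel(for: mlModel) else {
        completion(nil)
        return
    }

    // Step 5: the request runs the style transfer; Vision performs most of
    // the preprocessing from step 4 (e.g. scaling to the model's input size).
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Step 6: convert the model output back into a UIImage for display.
        guard let result = request.results?.first as? VNPixelBufferObservation else {
            completion(nil)
            return
        }
        let ciImage = CIImage(cvPixelBuffer: result.pixelBuffer)
        guard let output = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
            completion(nil)
            return
        }
        completion(UIImage(cgImage: output))
    }
    request.imageCropAndScaleOption = .scaleFill

    // Step 2: hand the input image to Vision, off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }
}
```

Call `stylize(_:completion:)` with the content image and assign the result to an image view on the main queue.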

Remember that implementing a complete image filter using Core ML for style transfer involves several steps and considerations.

You’ll need to handle the model loading, input/output processing, and user interface integration in your application.

Furthermore, depending on the complexity of the style-transfer model and the size of the input images, you may need to tune performance, for example by resizing input images or running inference off the main thread, to keep the user experience smooth.


