Welcome everyone!
I spent the last week working on a Core Image project under the mentorship of Paul Hudson.
I found it extremely interesting how Core Image is integrated with SwiftUI.
There are four types of images -
SwiftUI Image
UIImage
CGImage
CIImage
UIImage, CGImage and CIImage are “pure data”, so they can’t be displayed in a view by themselves. They hold the image data you can manipulate, which you can later present in the SwiftUI Image view.
Let’s have a brief introduction to each of these “data” types.
UIImage comes from UIKit. It’s the most powerful image type, able to work with all kinds of images like PNG and SVG, and even animations.
CGImage comes from Core Graphics. It’s purely a 2-D array of pixels.
CIImage comes from Core Image. It just stores the information needed to produce an image, but doesn’t render any pixels unless specifically asked to. Apple calls it an “image recipe”, not an actual image.
Now, I’ll explain the entire story in layman’s terms.
First, you need to load the image from the asset catalog. To do this, you load it into a UIImage using its initializer.
Then convert this UIImage to a CIImage to start the manipulation (by manipulation I mean filters, lol).
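Here’s a minimal sketch of those two steps, assuming an asset named "Example" in the catalog (the name is just a placeholder of mine):

```swift
import UIKit
import CoreImage

func loadImage() {
    // Step 1: a UIImage from the asset catalog.
    // UIImage(named:) is failable; it's nil if no asset called "Example" exists.
    guard let inputImage = UIImage(named: "Example") else { return }

    // Step 2: wrap it in a CIImage so Core Image can start manipulating it.
    // CIImage(image:) is failable too.
    let beginImage = CIImage(image: inputImage)
    // beginImage feeds the filter in the next step.
}
```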
Now, you need to create two things.
Core Image context (CIContext)
Core Image filters (CIFilter)
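Continuing the sketch, here are both pieces; sepia tone is just my example pick from the built-in filters, not the only choice:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// The context is what will later render a CIImage into a CGImage.
let context = CIContext()

// A filter transforms the image data; sepia tone is one of many built-ins.
let currentFilter = CIFilter.sepiaTone()
currentFilter.inputImage = beginImage  // the CIImage from the previous step
currentFilter.intensity = 1            // 0 leaves it untouched, 1 is full sepia
```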
The context just converts a CIImage to a CGImage.
The filters do the actual work of transforming the image data (brightness, blurring, contrast, and so on).
Then we receive the output as a CIImage, which we need to convert to a SwiftUI Image.
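There’s no direct CIImage-to-Image route, so that last stretch takes a few hops. Continuing the same sketch:

```swift
import SwiftUI

// The filter's result is an optional CIImage.
guard let outputImage = currentFilter.outputImage else { return }

// The context renders the CIImage "recipe" into actual pixels: a CGImage.
guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return }

// CGImage -> UIImage -> SwiftUI Image, which a view can finally display.
let uiImage = UIImage(cgImage: cgImage)
let image = Image(uiImage: uiImage)
```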
Get ready to witness the sheer play of these four image types.
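Putting it all together, here’s a minimal sketch of the whole pipeline in one SwiftUI view (the asset name and the sepia filter are still placeholder choices of mine):

```swift
import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins

struct ContentView: View {
    @State private var image: Image?

    var body: some View {
        VStack {
            // The SwiftUI Image is the only one of the four that can be shown.
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        // UIImage: load from the asset catalog (failable).
        guard let inputImage = UIImage(named: "Example") else { return }

        // CIImage: the "image recipe" that Core Image manipulates (failable).
        let beginImage = CIImage(image: inputImage)

        // The filter transforms the data; the context renders it.
        let context = CIContext()
        let currentFilter = CIFilter.sepiaTone()
        currentFilter.inputImage = beginImage
        currentFilter.intensity = 1

        // Back out again: CIImage -> CGImage -> UIImage -> SwiftUI Image.
        guard let outputImage = currentFilter.outputImage else { return }
        guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return }

        let uiImage = UIImage(cgImage: cgImage)
        image = Image(uiImage: uiImage)
    }
}
```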
Here, the CGImage and the CIImage are both optionals: when we ask for those types, an image might or might not be present, which is why the conversions can fail and need unwrapping.
If you want to see the code for this, go check out my GitHub profile. The code is available in the repository named "InstaFilter".
Until next time,
I bid you goodbye ♡
September 6, 2024