Add beauty camera

PengGJ 2025-04-19 16:54:21 +08:00
parent 11bdd5852a
commit 4c6a41a8be
390 changed files with 39037 additions and 3421 deletions

ProductApp/Podfile

@@ -53,6 +53,7 @@ pod 'IJKMediaFramework'
 #pod 'ICGVideoTrimmer'
+pod 'GPUImage', '~> 0.1.7'
 post_install do |installer|
   installer.generated_projects.each do |project|

ProductApp/Podfile.lock

@@ -15,6 +15,7 @@ PODS:
   - AFNetworking/UIKit (4.0.1):
     - AFNetworking/NSURLSession
   - DZNEmptyDataSet (1.8.1)
+  - GPUImage (0.1.7)
   - GTCommonSDK (3.1.3.0):
     - ZXSDK
   - GTExtensionSDK (3.0.3)
@@ -54,6 +55,7 @@ PODS:
 DEPENDENCIES:
   - AFNetworking
   - DZNEmptyDataSet
+  - GPUImage (~> 0.1.7)
   - GTExtensionSDK
   - GTSDK
   - GYSDK
@@ -84,6 +86,7 @@ SPEC REPOS:
   https://github.com/CocoaPods/Specs.git:
     - AFNetworking
     - DZNEmptyDataSet
+    - GPUImage
     - GTCommonSDK
     - GTExtensionSDK
     - GTSDK
@@ -115,6 +118,7 @@ SPEC REPOS:
 SPEC CHECKSUMS:
   AFNetworking: 3bd23d814e976cd148d7d44c3ab78017b744cd58
   DZNEmptyDataSet: 9525833b9e68ac21c30253e1d3d7076cc828eaa7
+  GPUImage: 733a5f0fab92df9de1c37ba9df520a833ccb406d
   GTCommonSDK: 4eccd0c8593cc8f458976463e25be8a733896a05
   GTExtensionSDK: 9a5008e262732a2d3f58f3aee1a3e5a91ee20e1d
   GTSDK: b002384999146d5b86f1c9b56e20882ffbfd2796
@@ -143,6 +147,6 @@ SPEC CHECKSUMS:
   YYText: 5c461d709e24d55a182d1441c41dc639a18a4849
   ZXSDK: 414bed508b670437a9bdf7c75d49816b8cb7b2d4
-PODFILE CHECKSUM: c859fd90f543aa4ab097e8944bcea05de8d46d8f
+PODFILE CHECKSUM: 1554ca8f2595affc7e6b34bd057cd94aede8c771
 COCOAPODS: 1.16.2

ProductApp/Pods/GPUImage/License.txt (generated, executable file, 9 lines)

@@ -0,0 +1,9 @@
Copyright (c) 2012, Brad Larson, Ben Cochran, Hugues Lismonde, Keitaroh Kobayashi, Alaric Cole, Matthew Clark, Jacob Gundersen, Chris Williams.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the GPUImage framework nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

ProductApp/Pods/GPUImage/README.md (generated, executable file, 733 lines)

@@ -0,0 +1,733 @@
# GPUImage #
<div style="float: right"><img src="http://sunsetlakesoftware.com/sites/default/files/GPUImageLogo.png" /></div>
<a href="https://zenodo.org/record/10416#.U5YGaF773Md"><img src="https://zenodo.org/badge/doi/10.5281/zenodo.10416.png" /></a>
Brad Larson
http://www.sunsetlakesoftware.com
[@bradlarson](http://twitter.com/bradlarson)
contact@sunsetlakesoftware.com
## Overview ##
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
However, running custom filters on the GPU requires a lot of code to set up and maintain an OpenGL ES 2.0 rendering target for these filters. I created a sample project to do this:
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
and found that there was a lot of boilerplate code I had to write in its creation. Therefore, I put together this framework that encapsulates a lot of the common tasks you'll encounter when processing images and video and made it so that you don't need to care about the OpenGL ES 2.0 underpinnings.
This framework compares favorably to Core Image when handling video, taking only 2.5 ms on an iPhone 4 to upload a frame from the camera, apply a gamma filter, and display, versus 106 ms for the same operation using Core Image. CPU-based processing takes 460 ms, making GPUImage 40X faster than Core Image for this operation on this hardware, and 184X faster than CPU-bound processing. On an iPhone 4S, GPUImage is only 4X faster than Core Image for this case, and 102X faster than CPU-bound processing. However, for more complex operations like Gaussian blurs at larger radii, Core Image currently outpaces GPUImage.
## License ##
BSD-style, with the full license available with the framework in License.txt.
## Technical requirements ##
- OpenGL ES 2.0: Applications using this will not run on the original iPhone, iPhone 3G, and 1st and 2nd generation iPod touches
- iOS 4.1 as a deployment target (4.0 didn't have some extensions needed for movie reading). iOS 4.3 is needed as a deployment target if you wish to show live video previews when taking a still photo.
- iOS 5.0 SDK to build
- Devices must have a camera to use camera-related functionality (obviously)
- The framework uses automatic reference counting (ARC), but should support projects using both ARC and manual reference counting if added as a subproject as explained below. For manual reference counting applications targeting iOS 4.x, you'll need to add -fobjc-arc to the Other Linker Flags for your application project.
## General architecture ##
GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. However, it hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.
Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera), GPUImageStillCamera (for taking photos with the camera), GPUImagePicture (for still images), and GPUImageMovie (for movies). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:

    GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView

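Because a single output can feed several targets, branching is just a matter of calling -addTarget: more than once. A minimal sketch of a branched chain, assuming rawView and sepiaView are GPUImageView instances already placed in your view hierarchy:

    // One camera output drives two chains: an unfiltered preview and a sepia branch.
    GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];

    [camera addTarget:rawView];   // Target 1: straight to the screen
    [camera addTarget:sepia];     // Target 2: through the sepia filter
    [sepia addTarget:sepiaView];

    [camera startCameraCapture];
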
## Adding the static library to your iOS project ##
Note: if you want to use this in a Swift project, you need to use the steps in the "Adding this as a framework" section instead of the following. Swift needs modules for third-party code.
Once you have the latest source code for the framework, it's fairly straightforward to add it to your application. Start by dragging the GPUImage.xcodeproj file into your application's Xcode project to embed the framework in your project. Next, go to your application's target and add GPUImage as a Target Dependency. Finally, you'll want to drag the libGPUImage.a library from the GPUImage framework's Products folder to the Link Binary With Libraries build phase in your application's target.
GPUImage needs a few other frameworks to be linked into your application, so you'll need to add the following as linked libraries in your application target:
- CoreMedia
- CoreVideo
- OpenGLES
- AVFoundation
- QuartzCore
You'll also need to find the framework headers, so within your project's build settings set the Header Search Paths to the relative path from your application to the framework/ subdirectory within the GPUImage source directory. Make this header search path recursive.
To use the GPUImage classes within your application, simply include the core framework header using the following:

    #import "GPUImage.h"

As a note: if you run into the error "Unknown class GPUImageView in Interface Builder" or the like when trying to build an interface with Interface Builder, you may need to add -ObjC to your Other Linker Flags in your project's build settings.
Also, if you need to deploy this to iOS 4.x, it appears that the current version of Xcode (4.3) requires that you weak-link the Core Video framework in your final application or you see crashes with the message "Symbol not found: _CVOpenGLESTextureCacheCreate" when you create an archive for upload to the App Store or for ad hoc distribution. To do this, go to your project's Build Phases tab, expand the Link Binary With Libraries group, and find CoreVideo.framework in the list. Change the setting for it in the far right of the list from Required to Optional.
Additionally, this is an ARC-enabled framework, so if you want to use this within a manual reference counted application targeting iOS 4.x, you'll need to add -fobjc-arc to your Other Linker Flags as well.
### Building a static library at the command line ###
If you don't want to include the project as a dependency in your application's Xcode project, you can build a universal static library for the iOS Simulator or device. To do this, run `build.sh` at the command line. The resulting library and header files will be located at `build/Release-iphone`. You may also change the version of the iOS SDK by changing the `IOSSDK_VER` variable in `build.sh` (all available versions can be found using `xcodebuild -showsdks`).
## Adding this as a framework (module) to your Mac or iOS project ##
Xcode 6 and iOS 8 support the use of full frameworks, as does the Mac, which simplifies the process of adding this to your application. To add this to your application, I recommend dragging the .xcodeproj project file into your application's project (as you would in the static library target).
For your application, go to its target build settings and choose the Build Phases tab. Under the Target Dependencies grouping, add GPUImageFramework on iOS (not GPUImage, which builds the static library) or GPUImage on the Mac. Under the Link Binary With Libraries section, add GPUImage.framework.
This should cause GPUImage to build as a framework. Under Xcode 6, this will also build as a module, which will allow you to use this in Swift projects. When set up as above, you should just need to use

    import GPUImage

to pull it in.
You then need to add a new Copy Files build phase, set the Destination to Frameworks, and add the GPUImage.framework build product to that. This will allow the framework to be bundled with your application (otherwise, you'll see cryptic "dyld: Library not loaded: @rpath/GPUImage.framework/GPUImage" errors on execution).
### Documentation ###
Documentation is generated from header comments using appledoc. To build the documentation, switch to the "Documentation" scheme in Xcode. You should ensure that "APPLEDOC_PATH" (a User-Defined build setting) points to an appledoc binary, available on <a href="https://github.com/tomaz/appledoc">Github</a> or through <a href="https://github.com/mxcl/homebrew">Homebrew</a>. It will also build and install a .docset file, which you can view with your favorite documentation tool.
## Performing common tasks ##
### Filtering live video ###
To filter live video from an iOS device's camera, you can use code like the following:

    GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
    GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];

    // Add the view somewhere so it's visible

    [videoCamera addTarget:customFilter];
    [customFilter addTarget:filteredVideoView];

    [videoCamera startCameraCapture];

This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
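For example, to zoom the video so it fills the view rather than letterboxing it (the fill mode constants here come from GPUImageView.h):

    // The other modes are kGPUImageFillModeStretch and kGPUImageFillModePreserveAspectRatio.
    filteredVideoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
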
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
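As a sketch of that two-input pattern, here is a dissolve blend of two still images (the image names are placeholders; the mix property is described in the blending filter list below):

    GPUImagePicture *baseImage    = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"Base.jpg"]];
    GPUImagePicture *overlayImage = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"Overlay.jpg"]];

    GPUImageDissolveBlendFilter *blendFilter = [[GPUImageDissolveBlendFilter alloc] init];
    blendFilter.mix = 0.5; // Equal weight to both inputs

    [baseImage addTarget:blendFilter];    // Added first, so this is the first input
    [overlayImage addTarget:blendFilter]; // Added second, so this is the second input

    [blendFilter useNextFrameForImageCapture];
    [baseImage processImage];
    [overlayImage processImage];
    UIImage *blendedImage = [blendFilter imageFromCurrentFramebuffer];
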
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:

    videoCamera.audioEncodingTarget = movieWriter;

### Capturing and filtering a still photo ###
To capture and filter still photos, you can use a process similar to the one for filtering video. Instead of a GPUImageVideoCamera, you use a GPUImageStillCamera:

    stillCamera = [[GPUImageStillCamera alloc] init];
    stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    filter = [[GPUImageGammaFilter alloc] init];
    [stillCamera addTarget:filter];
    GPUImageView *filterView = (GPUImageView *)self.view;
    [filter addTarget:filterView];

    [stillCamera startCameraCapture];

This will give you a live, filtered feed of the still camera's preview video. Note that this preview video is only provided on iOS 4.3 and higher, so you may need to set that as your deployment target if you wish to have this functionality.
Once you want to capture a photo, you use a callback block like the following:

    [stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
        NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];

        NSError *error2 = nil;
        if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
        {
            return;
        }
    }];

The above code captures a full-size photo processed by the same filter chain used in the preview view and saves that photo to disk as a JPEG in the application's documents directory.
Note that the framework currently can't handle images larger than 2048 pixels wide or high on older devices (those before the iPhone 4S, iPad 2, or Retina iPad) due to texture size limitations. This means that the iPhone 4, whose camera outputs still photos larger than this, won't be able to capture photos like this. A tiling mechanism is being implemented to work around this. All other devices should be able to capture and filter photos using this method.
### Processing a still image ###
There are a couple of ways to process a still image and create a result. The first way you can do this is by creating a still image source object and manually creating a filter chain:

    UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];

    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];

    [stillImageSource addTarget:stillImageFilter];
    [stillImageFilter useNextFrameForImageCapture];
    [stillImageSource processImage];

    UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];

Note that for a manual capture of an image from a filter, you need to set -useNextFrameForImageCapture in order to tell the filter that you'll be needing to capture from it later. By default, GPUImage reuses framebuffers within filters to conserve memory, so if you need to hold on to a filter's framebuffer for manual image capture, you need to let it know ahead of time.
For single filters that you wish to apply to an image, you can simply do the following:

    GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
    UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];

### Writing a custom filter ###
One significant advantage of this framework over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.
A custom filter is initialized with code like

    GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];

where the extension used for the fragment shader is .fsh. Additionally, you can use the -initWithFragmentShaderFromString: initializer to provide the fragment shader as a string, if you would not like to ship your fragment shaders in your application bundle.
Fragment shaders perform their calculations for each pixel to be rendered at that filter stage. They do this using the OpenGL Shading Language (GLSL), a C-like language with additions specific to 2-D and 3-D graphics. An example of a fragment shader is the following sepia-tone filter:

    varying highp vec2 textureCoordinate;

    uniform sampler2D inputImageTexture;

    void main()
    {
        lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
        lowp vec4 outputColor;
        outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
        outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);
        outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
        outputColor.a = 1.0;

        gl_FragColor = outputColor;
    }

For an image filter to be usable within the GPUImage framework, the first two lines that take in the textureCoordinate varying (for the current coordinate within the texture, normalized to 1.0) and the inputImageTexture uniform (for the actual input image frame texture) are required.
The remainder of the shader grabs the color of the pixel at this location in the passed-in texture, manipulates it in such a way as to produce a sepia tone, and writes that pixel color out to be used in the next stage of the processing pipeline.
One thing to note when adding fragment shaders to your Xcode project is that Xcode thinks they are source code files. To work around this, you'll need to manually move your shader from the Compile Sources build phase to the Copy Bundle Resources one in order to get the shader to be included in your application bundle.
### Filtering and re-encoding a movie ###
Movies can be loaded into the framework via the GPUImageMovie class, filtered, and then written out using a GPUImageMovieWriter. GPUImageMovieWriter is also fast enough to record video in realtime from an iPhone 4's camera at 640x480, so a direct filtered video source can be fed into it. Currently, GPUImageMovieWriter is fast enough to record live 720p video at up to 20 FPS on the iPhone 4, and both 720p and 1080p video at 30 FPS on the iPhone 4S (as well as on the new iPad).
The following is an example of how you would load a sample movie, pass it through a pixellation filter, then record the result to disk as a 480 x 640 h.264 movie:

    movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
    pixellateFilter = [[GPUImagePixellateFilter alloc] init];
    [movieFile addTarget:pixellateFilter];

    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
    unlink([pathToMovie UTF8String]);
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];

    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
    [pixellateFilter addTarget:movieWriter];

    movieWriter.shouldPassthroughAudio = YES;
    movieFile.audioEncodingTarget = movieWriter;
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

    [movieWriter startRecording];
    [movieFile startProcessing];

Once recording is finished, you need to remove the movie recorder from the filter chain and close off the recording using code like the following:

    [pixellateFilter removeTarget:movieWriter];
    [movieWriter finishRecording];

A movie won't be usable until it has been finished off, so if this is interrupted before this point, the recording will be lost.
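If you need to know exactly when the file has been finalized, a sketch using the completion-handler variant of this call (-finishRecordingWithCompletionHandler:, assumed from this version's GPUImageMovieWriter header), with the movieWriter and pixellateFilter from above:

    [pixellateFilter removeTarget:movieWriter];
    [movieWriter finishRecordingWithCompletionHandler:^{
        // The movie at movieURL is now safe to read, move, or upload.
        NSLog(@"Movie finalized");
    }];
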
### Interacting with OpenGL ES ###
GPUImage can both export and import textures from OpenGL ES through the use of its GPUImageTextureOutput and GPUImageTextureInput classes, respectively. This lets you record a movie from an OpenGL ES scene that is rendered to a framebuffer object with a bound texture, or filter video or images and then feed them into OpenGL ES as a texture to be displayed in the scene.
The one caution with this approach is that the textures used in these processes must be shared between GPUImage's OpenGL ES context and any other context via a share group or something similar.
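As a sketch of the texture-import direction, assuming yourRenderedTexture is a hypothetical texture handle produced by your own OpenGL ES renderer in a share-group-compatible context (the initializer and processing selectors are assumptions based on GPUImageTextureInput's header):

    // Feed an externally rendered texture into a filter chain, one frame at a time.
    GLuint sceneTexture = yourRenderedTexture; // hypothetical handle from your renderer
    GPUImageTextureInput *textureInput = [[GPUImageTextureInput alloc] initWithTexture:sceneTexture size:CGSizeMake(1024.0, 768.0)];

    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    [textureInput addTarget:sepiaFilter];

    // Pushes the current texture contents through the chain.
    [textureInput processTextureWithFrameTime:kCMTimeZero];
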
## Built-in filters ##
There are currently 125 built-in filters, divided into the following categories:
### Color adjustments ###
- **GPUImageBrightnessFilter**: Adjusts the brightness of the image
- *brightness*: The adjusted brightness (-1.0 - 1.0, with 0.0 as the default)
- **GPUImageExposureFilter**: Adjusts the exposure of the image
- *exposure*: The adjusted exposure (-10.0 - 10.0, with 0.0 as the default)
- **GPUImageContrastFilter**: Adjusts the contrast of the image
- *contrast*: The adjusted contrast (0.0 - 4.0, with 1.0 as the default)
- **GPUImageSaturationFilter**: Adjusts the saturation of an image
- *saturation*: The degree of saturation or desaturation to apply to the image (0.0 - 2.0, with 1.0 as the default)
- **GPUImageGammaFilter**: Adjusts the gamma of an image
- *gamma*: The gamma adjustment to apply (0.0 - 3.0, with 1.0 as the default)
- **GPUImageLevelsFilter**: Photoshop-like levels adjustment. The min, max, minOut and maxOut parameters are floats in the range [0, 1]. If you have parameters from Photoshop in the range [0, 255] you must first convert them to be [0, 1]. The gamma/mid parameter is a float >= 0. This matches the value from Photoshop. If you want to apply levels to RGB as well as individual channels you need to use this filter twice - first for the individual channels and then for all channels. (A conversion sketch appears at the end of this subsection.)
- **GPUImageColorMatrixFilter**: Transforms the colors of an image by applying a matrix to them
- *colorMatrix*: A 4x4 matrix used to transform each color in an image
- *intensity*: The degree to which the new transformed color replaces the original color for each pixel
- **GPUImageRGBFilter**: Adjusts the individual RGB channels of an image
- *red*: Normalized values by which each color channel is multiplied. The range is from 0.0 up, with 1.0 as the default.
- *green*:
- *blue*:
- **GPUImageHueFilter**: Adjusts the hue of an image
- *hue*: The hue angle, in degrees. 90 degrees by default
- **GPUImageToneCurveFilter**: Adjusts the colors of an image based on spline curves for each color channel.
- *redControlPoints*:
- *greenControlPoints*:
- *blueControlPoints*:
- *rgbCompositeControlPoints*: The tone curve takes in a series of control points that define the spline curve for each color component, or for all three in the composite. These are stored as NSValue-wrapped CGPoints in an NSArray, with normalized X and Y coordinates from 0 - 1. The defaults are (0,0), (0.5,0.5), (1,1).
- **GPUImageHighlightShadowFilter**: Adjusts the shadows and highlights of an image
- *shadows*: Increase to lighten shadows, from 0.0 to 1.0, with 0.0 as the default.
- *highlights*: Decrease to darken highlights, from 0.0 to 1.0, with 1.0 as the default.
- **GPUImageLookupFilter**: Uses an RGB color lookup image to remap the colors in an image. First, use your favourite photo editing application to apply a filter to lookup.png from GPUImage/framework/Resources. For this to work properly each pixel color must not depend on other pixels (e.g. blur will not work). If you need a more complex filter you can create as many lookup tables as required. Once ready, use your new lookup.png file as a second input for GPUImageLookupFilter.
- **GPUImageAmatorkaFilter**: A photo filter based on a Photoshop action by Amatorka: http://amatorka.deviantart.com/art/Amatorka-Action-2-121069631 . If you want to use this effect you have to add lookup_amatorka.png from the GPUImage Resources folder to your application bundle.
- **GPUImageMissEtikateFilter**: A photo filter based on a Photoshop action by Miss Etikate: http://miss-etikate.deviantart.com/art/Photoshop-Action-15-120151961 . If you want to use this effect you have to add lookup_miss_etikate.png from the GPUImage Resources folder to your application bundle.
- **GPUImageSoftEleganceFilter**: Another lookup-based color remapping filter. If you want to use this effect you have to add lookup_soft_elegance_1.png and lookup_soft_elegance_2.png from the GPUImage Resources folder to your application bundle.
- **GPUImageColorInvertFilter**: Inverts the colors of an image
- **GPUImageGrayscaleFilter**: Converts an image to grayscale (a slightly faster implementation of the saturation filter, without the ability to vary the color contribution)
- **GPUImageMonochromeFilter**: Converts the image to a single-color version, based on the luminance of each pixel
- *intensity*: The degree to which the specific color replaces the normal image color (0.0 - 1.0, with 1.0 as the default)
- *color*: The color to use as the basis for the effect, with (0.6, 0.45, 0.3, 1.0) as the default.
- **GPUImageFalseColorFilter**: Uses the luminance of the image to mix between two user-specified colors
- *firstColor*: The first and second colors specify what colors replace the dark and light areas of the image, respectively. The defaults are (0.0, 0.0, 0.5) and (1.0, 0.0, 0.0).
- *secondColor*:
- **GPUImageHazeFilter**: Used to add or remove haze (similar to a UV filter)
- *distance*: Strength of the color applied. Default 0. Values between -.3 and .3 are best.
- *slope*: Amount of color change. Default 0. Values between -.3 and .3 are best.
- **GPUImageSepiaFilter**: Simple sepia tone filter
- *intensity*: The degree to which the sepia tone replaces the normal image color (0.0 - 1.0, with 1.0 as the default)
- **GPUImageOpacityFilter**: Adjusts the alpha channel of the incoming image
- *opacity*: The value to multiply the incoming alpha channel for each pixel by (0.0 - 1.0, with 1.0 as the default)
- **GPUImageSolidColorGenerator**: This outputs a generated image with a solid color. You need to define the image size using -forceProcessingAtSize:
- *color*: The color, in a four component format, that is used to fill the image.
- **GPUImageLuminanceThresholdFilter**: Pixels with a luminance above the threshold will appear white, and those below will be black
- *threshold*: The luminance threshold, from 0.0 to 1.0, with a default of 0.5
- **GPUImageAdaptiveThresholdFilter**: Determines the local luminance around a pixel, then turns the pixel black if it is below that local luminance and white if above. This can be useful for picking out text under varying lighting conditions.
- *blurRadiusInPixels*: A multiplier for the background averaging blur radius in pixels, with a default of 4.
- **GPUImageAverageLuminanceThresholdFilter**: This applies a thresholding operation where the threshold is continually adjusted based on the average luminance of the scene.
- *thresholdMultiplier*: This is a factor that the average luminance will be multiplied by in order to arrive at the final threshold to use. By default, this is 1.0.
- **GPUImageHistogramFilter**: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
- *downsamplingFactor*: Rather than sampling every pixel, this dictates what fraction of the image is sampled. By default, this is 16 with a minimum of 1. This is needed to keep from saturating the histogram, which can only record 256 pixels for each color value before it becomes overloaded.
- **GPUImageHistogramGenerator**: This is a special filter, in that it's primarily intended to work with the GPUImageHistogramFilter. It generates an output representation of the color histograms generated by GPUImageHistogramFilter, but it could be repurposed to display other kinds of values. It takes in an image and looks at the center (vertical) pixels. It then plots the numerical values of the RGB components in separate colored graphs in an output texture. You may need to force a size for this filter in order to make its output visible.
- **GPUImageAverageColor**: This processes an input image and determines the average color of the scene, by averaging the RGBA components for each pixel in the image. A reduction process is used to progressively downsample the source image on the GPU, followed by a short averaging calculation on the CPU. The output from this filter is meaningless, but you need to set the colorAverageProcessingFinishedBlock property to a block that takes in four color components and a frame time and does something with them.
- **GPUImageLuminosity**: Like the GPUImageAverageColor, this reduces an image to its average luminosity. You need to set the luminosityProcessingFinishedBlock to handle the output of this filter, which just returns a luminosity value and a frame time.
- **GPUImageChromaKeyFilter**: For a given color in the image, sets the alpha channel to 0. This is similar to the GPUImageChromaKeyBlendFilter, only instead of blending in a second image for a matching color this doesn't take in a second image and just turns a given color transparent.
- *thresholdSensitivity*: How close a color match needs to exist to the target color to be replaced (default of 0.4)
- *smoothing*: How smoothly to blend for the color match (default of 0.1)
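As an example of the unit conversion noted for the GPUImageLevelsFilter above, Photoshop-style levels of (input 30, gamma 1.2, output 255) map into the [0, 1] range like this (the setMin:gamma:max:minOut:maxOut: selector and the image name are assumptions for illustration):

    GPUImageLevelsFilter *levelsFilter = [[GPUImageLevelsFilter alloc] init];
    // Photoshop values divided by 255 to land in [0, 1]; gamma passes through unchanged.
    [levelsFilter setMin:(30.0/255.0) gamma:1.2 max:(255.0/255.0) minOut:(0.0/255.0) maxOut:(255.0/255.0)];

    UIImage *inputImage = [UIImage imageNamed:@"Photo.jpg"]; // placeholder asset
    UIImage *adjustedImage = [levelsFilter imageByFilteringImage:inputImage];
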
### Image processing ###
- **GPUImageTransformFilter**: This applies an arbitrary 2-D or 3-D transformation to an image
- *affineTransform*: This takes in a CGAffineTransform to adjust an image in 2-D
- *transform3D*: This takes in a CATransform3D to manipulate an image in 3-D
- *ignoreAspectRatio*: By default, the aspect ratio of the transformed image is maintained, but this can be set to YES to make the transformation independent of aspect ratio
- **GPUImageCropFilter**: This crops an image to a specific region, then passes only that region on to the next stage in the filter
- *cropRegion*: A rectangular area to crop out of the image, normalized to coordinates from 0.0 - 1.0. The (0.0, 0.0) position is in the upper left of the image.
- **GPUImageLanczosResamplingFilter**: This lets you up- or downsample an image using Lanczos resampling, which results in noticeably better quality than the standard linear or trilinear interpolation. Simply use -forceProcessingAtSize: to set the target output resolution for the filter, and the image will be resampled for that new size.
- **GPUImageSharpenFilter**: Sharpens the image
- *sharpness*: The sharpness adjustment to apply (-4.0 - 4.0, with 0.0 as the default)
- **GPUImageUnsharpMaskFilter**: Applies an unsharp mask
- *blurRadiusInPixels*: The blur radius of the underlying Gaussian blur. The default is 4.0.
- *intensity*: The strength of the sharpening, from 0.0 on up, with a default of 1.0
- **GPUImageGaussianBlurFilter**: A hardware-optimized, variable-radius Gaussian blur
- *texelSpacingMultiplier*: A multiplier for the spacing between texels, ranging from 0.0 on up, with a default of 1.0. Adjusting this may slightly increase the blur strength, but will introduce artifacts in the result. Highly recommend using other parameters first, before touching this one.
- *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 2.0. This adjusts the sigma variable in the Gaussian distribution function.
- *blurRadiusAsFractionOfImageWidth*:
- *blurRadiusAsFractionOfImageHeight*: Setting these properties will allow the blur radius to scale with the size of the image
- *blurPasses*: The number of times to sequentially blur the incoming image. The more passes, the slower the filter.
- **GPUImageBoxBlurFilter**: A hardware-optimized, variable-radius box blur
- *texelSpacingMultiplier*: A multiplier for the spacing between texels, ranging from 0.0 on up, with a default of 1.0. Adjusting this may slightly increase the blur strength, but will introduce artifacts in the result. Highly recommend using other parameters first, before touching this one.
- *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 2.0. This adjusts the sigma variable in the Gaussian distribution function.
- *blurRadiusAsFractionOfImageWidth*:
- *blurRadiusAsFractionOfImageHeight*: Setting these properties will allow the blur radius to scale with the size of the image
- *blurPasses*: The number of times to sequentially blur the incoming image. The more passes, the slower the filter.
- **GPUImageSingleComponentGaussianBlurFilter**: A modification of the GPUImageGaussianBlurFilter that operates only on the red component
- *texelSpacingMultiplier*: A multiplier for the spacing between texels, ranging from 0.0 on up, with a default of 1.0. Adjusting this may slightly increase the blur strength, but will introduce artifacts in the result. Highly recommend using other parameters first, before touching this one.
- *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 2.0. This adjusts the sigma variable in the Gaussian distribution function.
- *blurRadiusAsFractionOfImageWidth*:
- *blurRadiusAsFractionOfImageHeight*: Setting these properties will allow the blur radius to scale with the size of the image
- *blurPasses*: The number of times to sequentially blur the incoming image. The more passes, the slower the filter.
- **GPUImageGaussianSelectiveBlurFilter**: A Gaussian blur that preserves focus within a circular region
- *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 5.0. This adjusts the sigma variable in the Gaussian distribution function.
- *excludeCircleRadius*: The radius of the circular area being excluded from the blur
- *excludeCirclePoint*: The center of the circular area being excluded from the blur
- *excludeBlurSize*: The size of the area between the blurred portion and the clear circle
- *aspectRatio*: The aspect ratio of the image, used to adjust the circularity of the in-focus region. By default, this matches the image aspect ratio, but you can override this value.
- **GPUImageGaussianBlurPositionFilter**: The inverse of the GPUImageGaussianSelectiveBlurFilter, applying the blur only within a certain circle
- *blurSize*: A multiplier for the size of the blur, ranging from 0.0 on up, with a default of 1.0
- *blurCenter*: Center for the blur, defaults to 0.5, 0.5
- *blurRadius*: Radius for the blur, defaults to 1.0
- **GPUImageiOSBlurFilter**: An attempt to replicate the background blur used on iOS 7 in places like the control center.
- *blurRadiusInPixels*: A radius in pixels to use for the blur, with a default of 12.0. This adjusts the sigma variable in the Gaussian distribution function.
- *saturation*: Saturation ranges from 0.0 (fully desaturated) to 2.0 (max saturation), with 0.8 as the normal level
- *downsampling*: The degree to which to downsample, then upsample the incoming image to minimize computations within the Gaussian blur, with a default of 4.0.
- **GPUImageMedianFilter**: Takes the median value of the three color components, over a 3x3 area
- **GPUImageBilateralFilter**: A bilateral blur, which tries to blur similar color values while preserving sharp edges (a live-camera sketch appears at the end of this subsection)
- *texelSpacingMultiplier*: A multiplier for the spacing between texel reads, ranging from 0.0 on up, with a default of 4.0
- *distanceNormalizationFactor*: A normalization factor for the distance between central color and sample color, with a default of 8.0.
- **GPUImageTiltShiftFilter**: A simulated tilt shift lens effect
- *blurRadiusInPixels*: The radius of the underlying blur, in pixels. This is 7.0 by default.
- *topFocusLevel*: The normalized location of the top of the in-focus area in the image, this value should be lower than bottomFocusLevel, default 0.4
- *bottomFocusLevel*: The normalized location of the bottom of the in-focus area in the image, this value should be higher than topFocusLevel, default 0.6
- *focusFallOffRate*: The rate at which the image gets blurry away from the in-focus region, default 0.2
- **GPUImage3x3ConvolutionFilter**: Runs a 3x3 convolution kernel against the image
- *convolutionKernel*: The convolution kernel is a 3x3 matrix of values to apply to the pixel and its 8 surrounding pixels. The matrix is specified in row-major order, with the top left pixel being one.one and the bottom right three.three. If the values in the matrix don't add up to 1.0, the image could be brightened or darkened.
- **GPUImageSobelEdgeDetectionFilter**: Sobel edge detection, with edges highlighted in white
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0.
- **GPUImagePrewittEdgeDetectionFilter**: Prewitt edge detection, with edges highlighted in white
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0.
- **GPUImageThresholdEdgeDetectionFilter**: Performs Sobel edge detection, but applies a threshold instead of giving gradual strength values
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0.
- *threshold*: Any edge above this threshold will be black, and anything below white. Ranges from 0.0 to 1.0, with 0.8 as the default
- **GPUImageCannyEdgeDetectionFilter**: This uses the full Canny process to highlight one-pixel-wide edges
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *blurRadiusInPixels*: The underlying blur radius for the Gaussian blur. Default is 2.0.
- *blurTexelSpacingMultiplier*: The underlying blur texel spacing multiplier. Default is 1.0.
- *upperThreshold*: Any edge with a gradient magnitude above this threshold will pass and show up in the final result. Default is 0.4.
- *lowerThreshold*: Any edge with a gradient magnitude below this threshold will fail and be removed from the final result. Default is 0.1.
- **GPUImageHarrisCornerDetectionFilter**: Runs the Harris corner detection algorithm on an input image, and produces an image with those corner points as white pixels and everything else black. The cornersDetectedBlock can be set, and you will be provided with a list of corners (in normalized 0..1 X, Y coordinates) within that callback for whatever additional operations you want to perform.
- *blurRadiusInPixels*: The radius of the underlying Gaussian blur. The default is 2.0.
- *sensitivity*: An internal scaling factor applied to adjust the dynamic range of the cornerness maps generated in the filter. The default is 5.0.
- *threshold*: The threshold at which a point is detected as a corner. This can vary significantly based on the size, lighting conditions, and iOS device camera type, so it might take a little experimentation to get right for your cases. Default is 0.20.
- **GPUImageNobleCornerDetectionFilter**: Runs the Noble variant on the Harris corner detector. It behaves as described above for the Harris detector.
- *blurRadiusInPixels*: The radius of the underlying Gaussian blur. The default is 2.0.
- *sensitivity*: An internal scaling factor applied to adjust the dynamic range of the cornerness maps generated in the filter. The default is 5.0.
- *threshold*: The threshold at which a point is detected as a corner. This can vary significantly based on the size, lighting conditions, and iOS device camera type, so it might take a little experimentation to get right for your cases. Default is 0.2.
- **GPUImageShiTomasiCornerDetectionFilter**: Runs the Shi-Tomasi feature detector. It behaves as described above for the Harris detector.
- *blurRadiusInPixels*: The radius of the underlying Gaussian blur. The default is 2.0.
- *sensitivity*: An internal scaling factor applied to adjust the dynamic range of the cornerness maps generated in the filter. The default is 1.5.
- *threshold*: The threshold at which a point is detected as a corner. This can vary significantly based on the size, lighting conditions, and iOS device camera type, so it might take a little experimentation to get right for your cases. Default is 0.2.
- **GPUImageNonMaximumSuppressionFilter**: Currently used only as part of the Harris corner detection filter, this will sample a 1-pixel box around each pixel and determine if the center pixel's red channel is the maximum in that area. If it is, it stays. If not, it is set to 0 for all color components.
- **GPUImageXYDerivativeFilter**: An internal component within the Harris corner detection filter, this calculates the squared difference between the pixels to the left and right of this one, the squared difference of the pixels above and below this one, and the product of those two differences.
- **GPUImageCrosshairGenerator**: This draws a series of crosshairs on an image, most often used for identifying machine vision features. It does not take in a standard image like other filters, but a series of points in its -renderCrosshairsFromArray:count: method, which does the actual drawing. You will need to force this filter to render at the particular output size you need.
- *crosshairWidth*: The width, in pixels, of the crosshairs to be drawn onscreen.
- **GPUImageDilationFilter**: This performs an image dilation operation, where the maximum intensity of the red channel in a rectangular neighborhood is used for the intensity of this pixel. The radius of the rectangular area to sample over is specified on initialization, with a range of 1-4 pixels. This is intended for use with grayscale images, and it expands bright regions.
- **GPUImageRGBDilationFilter**: This is the same as the GPUImageDilationFilter, except that this acts on all color channels, not just the red channel.
- **GPUImageErosionFilter**: This performs an image erosion operation, where the minimum intensity of the red channel in a rectangular neighborhood is used for the intensity of this pixel. The radius of the rectangular area to sample over is specified on initialization, with a range of 1-4 pixels. This is intended for use with grayscale images, and it expands dark regions.
- **GPUImageRGBErosionFilter**: This is the same as the GPUImageErosionFilter, except that this acts on all color channels, not just the red channel.
- **GPUImageOpeningFilter**: This performs an erosion on the red channel of an image, followed by a dilation of the same radius. The radius is set on initialization, with a range of 1-4 pixels. This filters out smaller bright regions.
- **GPUImageRGBOpeningFilter**: This is the same as the GPUImageOpeningFilter, except that this acts on all color channels, not just the red channel.
- **GPUImageClosingFilter**: This performs a dilation on the red channel of an image, followed by an erosion of the same radius. The radius is set on initialization, with a range of 1-4 pixels. This filters out smaller dark regions.
- **GPUImageRGBClosingFilter**: This is the same as the GPUImageClosingFilter, except that this acts on all color channels, not just the red channel.
- **GPUImageLocalBinaryPatternFilter**: This performs a comparison of intensity of the red channel of the 8 surrounding pixels and that of the central one, encoding the comparison results in a bit string that becomes this pixel intensity. The least-significant bit is the top-right comparison, going counterclockwise to end at the right comparison as the most significant bit.
- **GPUImageLowPassFilter**: This applies a low pass filter to incoming video frames. This basically accumulates a weighted rolling average of previous frames with the current ones as they come in. This can be used to denoise video, add motion blur, or be used to create a high pass filter.
- *filterStrength*: This controls the degree by which the previous accumulated frames are blended with the current one. This ranges from 0.0 to 1.0, with a default of 0.5.
- **GPUImageHighPassFilter**: This applies a high pass filter to incoming video frames. This is the inverse of the low pass filter, showing the difference between the current frame and the weighted rolling average of previous ones. This is most useful for motion detection.
- *filterStrength*: This controls the degree by which the previous accumulated frames are blended and then subtracted from the current one. This ranges from 0.0 to 1.0, with a default of 0.5.
- **GPUImageMotionDetector**: This is a motion detector based on a high-pass filter. You set the motionDetectionBlock and on every incoming frame it will give you the centroid of any detected movement in the scene (in normalized X,Y coordinates) as well as an intensity of motion for the scene.
- *lowPassFilterStrength*: This controls the strength of the low pass filter used behind the scenes to establish the baseline that incoming frames are compared with. This ranges from 0.0 to 1.0, with a default of 0.5.
- **GPUImageHoughTransformLineDetector**: Detects lines in the image using a Hough transform into parallel coordinate space. This approach is based entirely on the PC lines process developed by the Graph@FIT research group at the Brno University of Technology and described in their publications: M. Dubská, J. Havel, and A. Herout. Real-Time Detection of Lines using Parallel Coordinates and OpenGL. Proceedings of SCCG 2011, Bratislava, SK, p. 7 (http://medusa.fit.vutbr.cz/public/data/papers/2011-SCCG-Dubska-Real-Time-Line-Detection-Using-PC-and-OpenGL.pdf) and M. Dubská, J. Havel, and A. Herout. PClines — Line detection using parallel coordinates. 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1489-1494 (http://medusa.fit.vutbr.cz/public/data/papers/2011-CVPR-Dubska-PClines.pdf).
- *edgeThreshold*: A threshold value for which a point is detected as belonging to an edge for determining lines. Default is 0.9.
- *lineDetectionThreshold*: A threshold value for which a local maximum is detected as belonging to a line in parallel coordinate space. Default is 0.20.
- *linesDetectedBlock*: This block is called on the detection of lines, usually on every processed frame. A C array containing normalized slopes and intercepts in m, b pairs (y=mx+b) is passed in, along with a count of the number of lines detected and the current timestamp of the video frame.
- **GPUImageLineGenerator**: A helper class that generates lines which can overlay the scene. The color of these lines can be adjusted using -setLineColorRed:green:blue:
- *lineWidth*: The width of the lines, in pixels, with a default of 1.0.
- **GPUImageMotionBlurFilter**: Applies a directional motion blur to an image
- *blurSize*: A multiplier for the blur size, ranging from 0.0 on up, with a default of 1.0
- *blurAngle*: The angular direction of the blur, in degrees. 0 degrees by default.
- **GPUImageZoomBlurFilter**: Applies a directional motion blur to an image
- *blurSize*: A multiplier for the blur size, ranging from 0.0 on up, with a default of 1.0
- *blurCenter*: The normalized center of the blur. (0.5, 0.5) by default
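Of the filters above, the GPUImageBilateralFilter is a common building block for "beauty camera" style skin smoothing, since it blurs within regions of similar color while leaving edges sharp. A minimal live sketch, where previewView stands in for an existing GPUImageView and the parameter value is an illustrative choice, not a recommendation:

    GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionFront];
    camera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageBilateralFilter *smoothing = [[GPUImageBilateralFilter alloc] init];
    smoothing.distanceNormalizationFactor = 6.0; // Illustrative; tune per device and taste

    [camera addTarget:smoothing];
    [smoothing addTarget:previewView];
    [camera startCameraCapture];
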
### Blending modes ###
- **GPUImageChromaKeyBlendFilter**: Selectively replaces a color in the first image with the second image (a green-screen sketch appears at the end of this subsection)
- *thresholdSensitivity*: How close a color match needs to exist to the target color to be replaced (default of 0.4)
- *smoothing*: How smoothly to blend for the color match (default of 0.1)
- **GPUImageDissolveBlendFilter**: Applies a dissolve blend of two images
- *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 0.5 as the default)
- **GPUImageMultiplyBlendFilter**: Applies a multiply blend of two images
- **GPUImageAddBlendFilter**: Applies an additive blend of two images
- **GPUImageSubtractBlendFilter**: Applies a subtractive blend of two images
- **GPUImageDivideBlendFilter**: Applies a division blend of two images
- **GPUImageOverlayBlendFilter**: Applies an overlay blend of two images
- **GPUImageDarkenBlendFilter**: Blends two images by taking the minimum value of each color component between the images
- **GPUImageLightenBlendFilter**: Blends two images by taking the maximum value of each color component between the images
- **GPUImageColorBurnBlendFilter**: Applies a color burn blend of two images
- **GPUImageColorDodgeBlendFilter**: Applies a color dodge blend of two images
- **GPUImageScreenBlendFilter**: Applies a screen blend of two images
- **GPUImageExclusionBlendFilter**: Applies an exclusion blend of two images
- **GPUImageDifferenceBlendFilter**: Applies a difference blend of two images
- **GPUImageHardLightBlendFilter**: Applies a hard light blend of two images
- **GPUImageSoftLightBlendFilter**: Applies a soft light blend of two images
- **GPUImageAlphaBlendFilter**: Blends the second image over the first, based on the second's alpha channel
- *mix*: The degree with which the second image overrides the first (0.0 - 1.0, with 1.0 as the default)
- **GPUImageSourceOverBlendFilter**: Applies a source over blend of two images
- **GPUImageColorBurnBlendFilter**: Applies a color burn blend of two images
- **GPUImageColorDodgeBlendFilter**: Applies a color dodge blend of two images
- **GPUImageNormalBlendFilter**: Applies a normal blend of two images
- **GPUImageColorBlendFilter**: Applies a color blend of two images
- **GPUImageHueBlendFilter**: Applies a hue blend of two images
- **GPUImageSaturationBlendFilter**: Applies a saturation blend of two images
- **GPUImageLuminosityBlendFilter**: Applies a luminosity blend of two images
- **GPUImageLinearBurnBlendFilter**: Applies a linear burn blend of two images
- **GPUImagePoissonBlendFilter**: Applies a Poisson blend of two images
- *mix*: Mix ranges from 0.0 (only image 1) to 1.0 (only image 2 gradients), with 1.0 as the normal level
- *numIterations*: The number of times to propagate the gradients. Crank this up to 100 or even 1000 if you want to get anywhere near convergence. Yes, this will be slow.
- **GPUImageMaskFilter**: Masks one image using another
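A sketch of the GPUImageChromaKeyBlendFilter from the top of this list, keying green out of a live camera feed and showing a still image through the keyed-out areas. The setColorToReplaceRed:green:blue: selector and the background image name are assumptions; videoCamera and filteredVideoView reuse the live-video example earlier in this README:

    GPUImageChromaKeyBlendFilter *chromaKeyFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
    [chromaKeyFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0]; // Key out pure green
    chromaKeyFilter.thresholdSensitivity = 0.4;
    chromaKeyFilter.smoothing = 0.1;

    GPUImagePicture *background = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"Background.jpg"]];

    [videoCamera addTarget:chromaKeyFilter]; // First input: footage whose green is replaced
    [background addTarget:chromaKeyFilter];  // Second input: what shows through
    [background processImage];
    [chromaKeyFilter addTarget:filteredVideoView];
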
### Visual effects ###
- **GPUImagePixellateFilter**: Applies a pixellation effect on an image or video
- *fractionalWidthOfAPixel*: How large the pixels are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05)
- **GPUImagePolarPixellateFilter**: Applies a pixellation effect on an image or video, based on polar coordinates instead of Cartesian ones
- *center*: The center about which to apply the pixellation, defaulting to (0.5, 0.5)
- *pixelSize*: The fractional pixel size, split into width and height components. The default is (0.05, 0.05)
- **GPUImagePolkaDotFilter**: Breaks an image up into colored dots within a regular grid
- *fractionalWidthOfAPixel*: How large the dots are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05)
- *dotScaling*: What fraction of each grid space is taken up by a dot, from 0.0 to 1.0 with a default of 0.9.
- **GPUImageHalftoneFilter**: Applies a halftone effect to an image, like news print
- *fractionalWidthOfAPixel*: How large the halftone dots are, as a fraction of the width and height of the image (0.0 - 1.0, default 0.05)
- **GPUImageCrosshatchFilter**: This converts an image into a black-and-white crosshatch pattern
- *crossHatchSpacing*: The fractional width of the image to use as the spacing for the crosshatch. The default is 0.03.
- *lineWidth*: A relative width for the crosshatch lines. The default is 0.003.
- **GPUImageSketchFilter**: Converts video to look like a sketch. This is just the Sobel edge detection filter with the colors inverted
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0.
- **GPUImageThresholdSketchFilter**: Same as the sketch filter, only the edges are thresholded instead of being grayscale
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *edgeStrength*: Adjusts the dynamic range of the filter. Higher values lead to stronger edges, but can saturate the intensity colorspace. Default is 1.0.
- *threshold*: Any edge above this threshold will be black, and anything below white. Ranges from 0.0 to 1.0, with 0.8 as the default
- **GPUImageToonFilter**: This uses Sobel edge detection to place a black border around objects, and then it quantizes the colors present in the image to give a cartoon-like quality to the image.
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *threshold*: The sensitivity of the edge detection, with lower values being more sensitive. Ranges from 0.0 to 1.0, with 0.2 as the default
- *quantizationLevels*: The number of color levels to represent in the final image. Default is 10.0
- **GPUImageSmoothToonFilter**: This uses a similar process as the GPUImageToonFilter, only it precedes the toon effect with a Gaussian blur to smooth out noise.
- *texelWidth*:
- *texelHeight*: These parameters affect the visibility of the detected edges
- *blurRadiusInPixels*: The radius of the underlying Gaussian blur. The default is 2.0.
- *threshold*: The sensitivity of the edge detection, with lower values being more sensitive. Ranges from 0.0 to 1.0, with 0.2 as the default
- *quantizationLevels*: The number of color levels to represent in the final image. Default is 10.0
- **GPUImageEmbossFilter**: Applies an embossing effect on the image
- *intensity*: The strength of the embossing, from 0.0 to 4.0, with 1.0 as the normal level
- **GPUImagePosterizeFilter**: This reduces the color dynamic range into the number of steps specified, leading to a cartoon-like simple shading of the image.
- *colorLevels*: The number of color levels to reduce the image space to. This ranges from 1 to 256, with a default of 10.
- **GPUImageSwirlFilter**: Creates a swirl distortion on the image
- *radius*: The radius from the center to apply the distortion, with a default of 0.5
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to twist, with a default of (0.5, 0.5)
- *angle*: The amount of twist to apply to the image, with a default of 1.0
- **GPUImageBulgeDistortionFilter**: Creates a bulge distortion on the image
- *radius*: The radius from the center to apply the distortion, with a default of 0.25
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5)
- *scale*: The amount of distortion to apply, from -1.0 to 1.0, with a default of 0.5
- **GPUImagePinchDistortionFilter**: Creates a pinch distortion of the image
- *radius*: The radius from the center to apply the distortion, with a default of 1.0
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5)
- *scale*: The amount of distortion to apply, from -2.0 to 2.0, with a default of 1.0
- **GPUImageStretchDistortionFilter**: Creates a stretch distortion of the image
- *center*: The center of the image (in normalized coordinates from 0 - 1.0) about which to distort, with a default of (0.5, 0.5)
- **GPUImageSphereRefractionFilter**: Simulates the refraction through a glass sphere
- *center*: The center about which to apply the distortion, with a default of (0.5, 0.5)
- *radius*: The radius of the distortion, ranging from 0.0 to 1.0, with a default of 0.25
- *refractiveIndex*: The index of refraction for the sphere, with a default of 0.71
- **GPUImageGlassSphereFilter**: Same as the GPUImageSphereRefractionFilter, only the image is not inverted and there's a little bit of frosting at the edges of the glass
- *center*: The center about which to apply the distortion, with a default of (0.5, 0.5)
- *radius*: The radius of the distortion, ranging from 0.0 to 1.0, with a default of 0.25
- *refractiveIndex*: The index of refraction for the sphere, with a default of 0.71
- **GPUImageVignetteFilter**: Performs a vignetting effect, fading out the image at the edges
- *x*:
- *y*: The directional intensity of the vignetting, with a default of x = 0.75, y = 0.5
- **GPUImageKuwaharaFilter**: Kuwahara image abstraction, drawn from the work of Kyprianidis, et al. in their publication "Anisotropic Kuwahara Filtering on the GPU" within the GPU Pro collection. This produces an oil-painting-like image, but it is extremely computationally expensive, so it can take seconds to render a frame on an iPad 2. This might be best used for still images.
- *radius*: An integer specifying the number of pixels out from the center pixel to test when applying the filter, with a default of 4. A higher value creates a more abstracted image, but at the cost of much greater processing time.
- **GPUImageKuwaharaRadius3Filter**: A modified version of the Kuwahara filter, optimized to work over just a radius of three pixels
- **GPUImagePerlinNoiseFilter**: Generates an image full of Perlin noise
- *colorStart*:
- *colorFinish*: The color range for the noise being generated
- *scale*: The scaling of the noise being generated
- **GPUImageCGAColorspaceFilter**: Simulates the colorspace of a CGA monitor
- **GPUImageMosaicFilter**: This filter takes an input tileset; the tiles must ascend in luminance. It looks at the input image and replaces each display tile with an input tile according to the luminance of that tile. The idea was to replicate the ASCII video filters seen in other apps, but the tileset can be anything.
- *inputTileSize*:
- *numTiles*:
- *displayTileSize*:
- *colorOn*:
- **GPUImageJFAVoronoiFilter**: Generates a Voronoi map, for use in a later stage.
- *sizeInPixels*: Size of the individual elements
- **GPUImageVoronoiConsumerFilter**: Takes in the Voronoi map, and uses that to filter an incoming image.
- *sizeInPixels*: Size of the individual elements
You can also easily write your own custom filters using the C-like OpenGL Shading Language, as described above.
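For example, a minimal custom filter might look like the sketch below (the red-channel-inverting shader is purely illustrative, not one of the bundled shaders):

    // Wrap an illustrative fragment shader in a generic GPUImageFilter
    NSString *const kInvertRedShaderString = SHADER_STRING
    (
        varying highp vec2 textureCoordinate;
        uniform sampler2D inputImageTexture;
        void main()
        {
            lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
            gl_FragColor = vec4(1.0 - color.r, color.g, color.b, color.a);
        }
    );
    GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kInvertRedShaderString];

The SHADER_STRING macro comes from GPUImageFilter.h; shaders can also be loaded from .fsh files via -initWithFragmentShaderFromFile:.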
## Sample applications ##
Several sample applications are bundled with the framework source. Most are compatible with both iPhone and iPad-class devices. They attempt to show off various aspects of the framework and should be used as the best examples of the API while the framework is under development. These include:
### SimpleImageFilter ###
A bundled JPEG image is loaded into the application at launch, a filter is applied to it, and the result rendered to the screen. Additionally, this sample shows two ways of taking in an image, filtering it, and saving it to disk.
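A minimal sketch of that still-image flow, assuming a hypothetical sample.jpg in the application bundle:

    UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"]; // hypothetical bundled JPEG
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    [stillImageSource addTarget:sepiaFilter];
    [sepiaFilter useNextFrameForImageCapture]; // required before capture under the framebuffer cache
    [stillImageSource processImage];
    UIImage *filteredImage = [sepiaFilter imageFromCurrentFramebuffer];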
### SimpleVideoFilter ###
A pixellate filter is applied to a live video stream, with a UISlider control that lets you adjust the pixel size on the live video.
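A hedged sketch of that pipeline, assuming a GPUImageView named filterView already exists in your layout:

    GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
    [videoCamera addTarget:pixellateFilter];
    [pixellateFilter addTarget:filterView];
    [videoCamera startCameraCapture];

    // In the UISlider's action method:
    pixellateFilter.fractionalWidthOfAPixel = slider.value;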
### SimpleVideoFileFilter ###
A movie file is loaded from disk, an unsharp mask filter is applied to it, and the filtered result is re-encoded as another movie.
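A sketch of that offline flow; inputURL and outputURL are assumed, and the 640x480 output size is arbitrary:

    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:inputURL];
    GPUImageUnsharpMaskFilter *unsharpMaskFilter = [[GPUImageUnsharpMaskFilter alloc] init];
    GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(640.0, 480.0)];
    [movieFile addTarget:unsharpMaskFilter];
    [unsharpMaskFilter addTarget:movieWriter];
    [movieWriter setCompletionBlock:^{
        [unsharpMaskFilter removeTarget:movieWriter];
        [movieWriter finishRecording];
    }];
    [movieWriter startRecording];
    [movieFile startProcessing];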
### MultiViewFilterExample ###
From a single camera feed, four views are populated with realtime filters applied to the camera feed. One is just the straight camera video, one is a preprogrammed sepia tone, and two are custom filters based on shader programs.
### FilterShowcase ###
This demonstrates every filter supplied with GPUImage.
### BenchmarkSuite ###
This is used to test the performance of the overall framework by testing it against CPU-bound routines and Core Image. Benchmarks involving still images and video are run against all three, with results displayed in-application.
### CubeExample ###
This demonstrates the ability of GPUImage to interact with OpenGL ES rendering. Frames are captured from the camera, a sepia filter applied to them, and then they are fed into a texture to be applied to the face of a cube you can rotate with your finger. This cube in turn is rendered to a texture-backed framebuffer object, and that texture is fed back into GPUImage to have a pixellation filter applied to it before rendering to screen.
In other words, the path of this application is camera -> sepia tone filter -> cube -> pixellation filter -> display.
### ColorObjectTracking ###
A version of my ColorTracking example from http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios ported across to use GPUImage, this application uses color in a scene to track objects from a live camera feed. The four views you can switch between include the raw camera feed, the camera feed with pixels matching the color threshold in white, the processed video where positions are encoded as colors within the pixels passing the threshold test, and finally the live video feed with a dot that tracks the selected color. Tapping the screen changes the color to track to match the color of the pixels under your finger. Tapping and dragging on the screen makes the color threshold more or less forgiving. This is most obvious on the second, color thresholding view.
Currently, all processing for the color averaging in the last step is done on the CPU, so this part is extremely slow.

Binary image files added (2.0 KiB, 36 KiB, 198 KiB, 126 KiB, 2.1 KiB); previews not shown.

View File

@ -0,0 +1,42 @@
// This is Jeff LaMarche's GLProgram OpenGL shader wrapper class from his OpenGL ES 2.0 book.
// A description of this can be found at his page on the topic:
// http://iphonedevelopment.blogspot.com/2010/11/opengl-es-20-for-ios-chapter-4.html
// I've extended this to be able to take programs as NSStrings in addition to files, for baked-in shaders
#import <Foundation/Foundation.h>
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#else
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#endif
@interface GLProgram : NSObject
{
NSMutableArray *attributes;
NSMutableArray *uniforms;
GLuint program,
vertShader,
fragShader;
}
@property(readwrite, nonatomic) BOOL initialized;
@property(readwrite, copy, nonatomic) NSString *vertexShaderLog;
@property(readwrite, copy, nonatomic) NSString *fragmentShaderLog;
@property(readwrite, copy, nonatomic) NSString *programLog;
- (id)initWithVertexShaderString:(NSString *)vShaderString
fragmentShaderString:(NSString *)fShaderString;
- (id)initWithVertexShaderString:(NSString *)vShaderString
fragmentShaderFilename:(NSString *)fShaderFilename;
- (id)initWithVertexShaderFilename:(NSString *)vShaderFilename
fragmentShaderFilename:(NSString *)fShaderFilename;
- (void)addAttribute:(NSString *)attributeName;
- (GLuint)attributeIndex:(NSString *)attributeName;
- (GLuint)uniformIndex:(NSString *)uniformName;
- (BOOL)link;
- (void)use;
- (void)validate;
@end
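// Usage sketch (not part of the original header; vertexSource and fragmentSource
// are assumed NSString shader sources defined elsewhere):
//
// GLProgram *program = [[GLProgram alloc] initWithVertexShaderString:vertexSource
//                                             fragmentShaderString:fragmentSource];
// [program addAttribute:@"position"]; // attributes must be added before linking
// if (![program link])
// {
//     NSLog(@"Program link failed: %@", [program programLog]);
// }
// [program use];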

View File

@ -0,0 +1,236 @@
// This is Jeff LaMarche's GLProgram OpenGL shader wrapper class from his OpenGL ES 2.0 book.
// A description of this can be found at his page on the topic:
// http://iphonedevelopment.blogspot.com/2010/11/opengl-es-20-for-ios-chapter-4.html
#import "GLProgram.h"
// START:typedefs
#pragma mark Function Pointer Definitions
typedef void (*GLInfoFunction)(GLuint program, GLenum pname, GLint* params);
typedef void (*GLLogFunction) (GLuint program, GLsizei bufsize, GLsizei* length, GLchar* infolog);
// END:typedefs
#pragma mark -
#pragma mark Private Extension Method Declaration
// START:extension
@interface GLProgram()
- (BOOL)compileShader:(GLuint *)shader
type:(GLenum)type
string:(NSString *)shaderString;
@end
// END:extension
#pragma mark -
@implementation GLProgram
// START:init
@synthesize initialized = _initialized;
- (id)initWithVertexShaderString:(NSString *)vShaderString
fragmentShaderString:(NSString *)fShaderString;
{
if ((self = [super init]))
{
_initialized = NO;
attributes = [[NSMutableArray alloc] init];
uniforms = [[NSMutableArray alloc] init];
program = glCreateProgram();
if (![self compileShader:&vertShader
type:GL_VERTEX_SHADER
string:vShaderString])
{
NSLog(@"Failed to compile vertex shader");
}
// Create and compile fragment shader
if (![self compileShader:&fragShader
type:GL_FRAGMENT_SHADER
string:fShaderString])
{
NSLog(@"Failed to compile fragment shader");
}
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
}
return self;
}
- (id)initWithVertexShaderString:(NSString *)vShaderString
fragmentShaderFilename:(NSString *)fShaderFilename;
{
NSString *fragShaderPathname = [[NSBundle mainBundle] pathForResource:fShaderFilename ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragShaderPathname encoding:NSUTF8StringEncoding error:nil];
if ((self = [self initWithVertexShaderString:vShaderString fragmentShaderString:fragmentShaderString]))
{
}
return self;
}
- (id)initWithVertexShaderFilename:(NSString *)vShaderFilename
fragmentShaderFilename:(NSString *)fShaderFilename;
{
NSString *vertShaderPathname = [[NSBundle mainBundle] pathForResource:vShaderFilename ofType:@"vsh"];
NSString *vertexShaderString = [NSString stringWithContentsOfFile:vertShaderPathname encoding:NSUTF8StringEncoding error:nil];
NSString *fragShaderPathname = [[NSBundle mainBundle] pathForResource:fShaderFilename ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragShaderPathname encoding:NSUTF8StringEncoding error:nil];
if ((self = [self initWithVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString]))
{
}
return self;
}
// END:init
// START:compile
- (BOOL)compileShader:(GLuint *)shader
type:(GLenum)type
string:(NSString *)shaderString
{
// CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
GLint status;
const GLchar *source = (GLchar *)[shaderString UTF8String];
if (!source)
{
NSLog(@"Failed to load shader string");
return NO;
}
*shader = glCreateShader(type);
glShaderSource(*shader, 1, &source, NULL);
glCompileShader(*shader);
glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE)
{
GLint logLength;
glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0)
{
GLchar *log = (GLchar *)malloc(logLength);
glGetShaderInfoLog(*shader, logLength, &logLength, log);
if (shader == &vertShader)
{
self.vertexShaderLog = [NSString stringWithFormat:@"%s", log];
}
else
{
self.fragmentShaderLog = [NSString stringWithFormat:@"%s", log];
}
free(log);
}
}
// CFAbsoluteTime linkTime = (CFAbsoluteTimeGetCurrent() - startTime);
// NSLog(@"Compiled in %f ms", linkTime * 1000.0);
return status == GL_TRUE;
}
// END:compile
#pragma mark -
// START:addattribute
- (void)addAttribute:(NSString *)attributeName
{
if (![attributes containsObject:attributeName])
{
[attributes addObject:attributeName];
glBindAttribLocation(program,
(GLuint)[attributes indexOfObject:attributeName],
[attributeName UTF8String]);
}
}
// END:addattribute
// START:indexmethods
- (GLuint)attributeIndex:(NSString *)attributeName
{
return (GLuint)[attributes indexOfObject:attributeName];
}
- (GLuint)uniformIndex:(NSString *)uniformName
{
return glGetUniformLocation(program, [uniformName UTF8String]);
}
// END:indexmethods
#pragma mark -
// START:link
- (BOOL)link
{
// CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
GLint status;
glLinkProgram(program);
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status == GL_FALSE)
return NO;
if (vertShader)
{
glDeleteShader(vertShader);
vertShader = 0;
}
if (fragShader)
{
glDeleteShader(fragShader);
fragShader = 0;
}
self.initialized = YES;
// CFAbsoluteTime linkTime = (CFAbsoluteTimeGetCurrent() - startTime);
// NSLog(@"Linked in %f ms", linkTime * 1000.0);
return YES;
}
// END:link
// START:use
- (void)use
{
glUseProgram(program);
}
// END:use
#pragma mark -
- (void)validate;
{
GLint logLength;
glValidateProgram(program);
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logLength);
if (logLength > 0)
{
GLchar *log = (GLchar *)malloc(logLength);
glGetProgramInfoLog(program, logLength, &logLength, log);
self.programLog = [NSString stringWithFormat:@"%s", log];
free(log);
}
}
#pragma mark -
// START:dealloc
- (void)dealloc
{
if (vertShader)
glDeleteShader(vertShader);
if (fragShader)
glDeleteShader(fragShader);
if (program)
glDeleteProgram(program);
}
// END:dealloc
@end

View File

@ -0,0 +1,164 @@
#import "GLProgram.h"
// Base classes
#import "GPUImageContext.h"
#import "GPUImageOutput.h"
#import "GPUImageView.h"
#import "GPUImageVideoCamera.h"
#import "GPUImageStillCamera.h"
#import "GPUImageMovie.h"
#import "GPUImagePicture.h"
#import "GPUImageRawDataInput.h"
#import "GPUImageRawDataOutput.h"
#import "GPUImageMovieWriter.h"
#import "GPUImageFilterPipeline.h"
#import "GPUImageTextureOutput.h"
#import "GPUImageFilterGroup.h"
#import "GPUImageTextureInput.h"
#import "GPUImageUIElement.h"
#import "GPUImageBuffer.h"
#import "GPUImageFramebuffer.h"
#import "GPUImageFramebufferCache.h"
// Filters
#import "GPUImageFilter.h"
#import "GPUImageTwoInputFilter.h"
#import "GPUImagePixellateFilter.h"
#import "GPUImagePixellatePositionFilter.h"
#import "GPUImageSepiaFilter.h"
#import "GPUImageColorInvertFilter.h"
#import "GPUImageSaturationFilter.h"
#import "GPUImageContrastFilter.h"
#import "GPUImageExposureFilter.h"
#import "GPUImageBrightnessFilter.h"
#import "GPUImageLevelsFilter.h"
#import "GPUImageSharpenFilter.h"
#import "GPUImageGammaFilter.h"
#import "GPUImageSobelEdgeDetectionFilter.h"
#import "GPUImageSketchFilter.h"
#import "GPUImageToonFilter.h"
#import "GPUImageSmoothToonFilter.h"
#import "GPUImageMultiplyBlendFilter.h"
#import "GPUImageDissolveBlendFilter.h"
#import "GPUImageKuwaharaFilter.h"
#import "GPUImageKuwaharaRadius3Filter.h"
#import "GPUImageVignetteFilter.h"
#import "GPUImageGaussianBlurFilter.h"
#import "GPUImageGaussianBlurPositionFilter.h"
#import "GPUImageGaussianSelectiveBlurFilter.h"
#import "GPUImageOverlayBlendFilter.h"
#import "GPUImageDarkenBlendFilter.h"
#import "GPUImageLightenBlendFilter.h"
#import "GPUImageSwirlFilter.h"
#import "GPUImageSourceOverBlendFilter.h"
#import "GPUImageColorBurnBlendFilter.h"
#import "GPUImageColorDodgeBlendFilter.h"
#import "GPUImageScreenBlendFilter.h"
#import "GPUImageExclusionBlendFilter.h"
#import "GPUImageDifferenceBlendFilter.h"
#import "GPUImageSubtractBlendFilter.h"
#import "GPUImageHardLightBlendFilter.h"
#import "GPUImageSoftLightBlendFilter.h"
#import "GPUImageColorBlendFilter.h"
#import "GPUImageHueBlendFilter.h"
#import "GPUImageSaturationBlendFilter.h"
#import "GPUImageLuminosityBlendFilter.h"
#import "GPUImageCropFilter.h"
#import "GPUImageGrayscaleFilter.h"
#import "GPUImageTransformFilter.h"
#import "GPUImageChromaKeyBlendFilter.h"
#import "GPUImageHazeFilter.h"
#import "GPUImageLuminanceThresholdFilter.h"
#import "GPUImagePosterizeFilter.h"
#import "GPUImageBoxBlurFilter.h"
#import "GPUImageAdaptiveThresholdFilter.h"
#import "GPUImageUnsharpMaskFilter.h"
#import "GPUImageBulgeDistortionFilter.h"
#import "GPUImagePinchDistortionFilter.h"
#import "GPUImageCrosshatchFilter.h"
#import "GPUImageCGAColorspaceFilter.h"
#import "GPUImagePolarPixellateFilter.h"
#import "GPUImageStretchDistortionFilter.h"
#import "GPUImagePerlinNoiseFilter.h"
#import "GPUImageJFAVoronoiFilter.h"
#import "GPUImageVoronoiConsumerFilter.h"
#import "GPUImageMosaicFilter.h"
#import "GPUImageTiltShiftFilter.h"
#import "GPUImage3x3ConvolutionFilter.h"
#import "GPUImageEmbossFilter.h"
#import "GPUImageCannyEdgeDetectionFilter.h"
#import "GPUImageThresholdEdgeDetectionFilter.h"
#import "GPUImageMaskFilter.h"
#import "GPUImageHistogramFilter.h"
#import "GPUImageHistogramGenerator.h"
#import "GPUImageHistogramEqualizationFilter.h"
#import "GPUImagePrewittEdgeDetectionFilter.h"
#import "GPUImageXYDerivativeFilter.h"
#import "GPUImageHarrisCornerDetectionFilter.h"
#import "GPUImageAlphaBlendFilter.h"
#import "GPUImageNormalBlendFilter.h"
#import "GPUImageNonMaximumSuppressionFilter.h"
#import "GPUImageRGBFilter.h"
#import "GPUImageMedianFilter.h"
#import "GPUImageBilateralFilter.h"
#import "GPUImageCrosshairGenerator.h"
#import "GPUImageToneCurveFilter.h"
#import "GPUImageNobleCornerDetectionFilter.h"
#import "GPUImageShiTomasiFeatureDetectionFilter.h"
#import "GPUImageErosionFilter.h"
#import "GPUImageRGBErosionFilter.h"
#import "GPUImageDilationFilter.h"
#import "GPUImageRGBDilationFilter.h"
#import "GPUImageOpeningFilter.h"
#import "GPUImageRGBOpeningFilter.h"
#import "GPUImageClosingFilter.h"
#import "GPUImageRGBClosingFilter.h"
#import "GPUImageColorPackingFilter.h"
#import "GPUImageSphereRefractionFilter.h"
#import "GPUImageMonochromeFilter.h"
#import "GPUImageOpacityFilter.h"
#import "GPUImageHighlightShadowFilter.h"
#import "GPUImageFalseColorFilter.h"
#import "GPUImageHSBFilter.h"
#import "GPUImageHueFilter.h"
#import "GPUImageGlassSphereFilter.h"
#import "GPUImageLookupFilter.h"
#import "GPUImageAmatorkaFilter.h"
#import "GPUImageMissEtikateFilter.h"
#import "GPUImageSoftEleganceFilter.h"
#import "GPUImageAddBlendFilter.h"
#import "GPUImageDivideBlendFilter.h"
#import "GPUImagePolkaDotFilter.h"
#import "GPUImageLocalBinaryPatternFilter.h"
#import "GPUImageLanczosResamplingFilter.h"
#import "GPUImageAverageColor.h"
#import "GPUImageSolidColorGenerator.h"
#import "GPUImageLuminosity.h"
#import "GPUImageAverageLuminanceThresholdFilter.h"
#import "GPUImageWhiteBalanceFilter.h"
#import "GPUImageChromaKeyFilter.h"
#import "GPUImageLowPassFilter.h"
#import "GPUImageHighPassFilter.h"
#import "GPUImageMotionDetector.h"
#import "GPUImageHalftoneFilter.h"
#import "GPUImageThresholdedNonMaximumSuppressionFilter.h"
#import "GPUImageHoughTransformLineDetector.h"
#import "GPUImageParallelCoordinateLineTransformFilter.h"
#import "GPUImageThresholdSketchFilter.h"
#import "GPUImageLineGenerator.h"
#import "GPUImageLinearBurnBlendFilter.h"
#import "GPUImageGaussianBlurPositionFilter.h"
#import "GPUImagePixellatePositionFilter.h"
#import "GPUImageTwoInputCrossTextureSamplingFilter.h"
#import "GPUImagePoissonBlendFilter.h"
#import "GPUImageMotionBlurFilter.h"
#import "GPUImageZoomBlurFilter.h"
#import "GPUImageLaplacianFilter.h"
#import "GPUImageiOSBlurFilter.h"
#import "GPUImageLuminanceRangeFilter.h"
#import "GPUImageDirectionalNonMaximumSuppressionFilter.h"
#import "GPUImageDirectionalSobelEdgeDetectionFilter.h"
#import "GPUImageSingleComponentGaussianBlurFilter.h"
#import "GPUImageThreeInputFilter.h"
#import "GPUImageWeakPixelInclusionFilter.h"

View File

@ -0,0 +1,18 @@
#import "GPUImage3x3TextureSamplingFilter.h"
/** Runs a 3x3 convolution kernel against the image
*/
@interface GPUImage3x3ConvolutionFilter : GPUImage3x3TextureSamplingFilter
{
GLint convolutionMatrixUniform;
}
/** Convolution kernel to run against the image
The convolution kernel is a 3x3 matrix of values to apply to the pixel and its 8 surrounding pixels.
The matrix is specified in row-major order, with the top left element being row one, column one and the bottom right being row three, column three
If the values in the matrix don't add up to 1.0, the image could be brightened or darkened.
*/
@property(readwrite, nonatomic) GPUMatrix3x3 convolutionKernel;
@end
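// Usage sketch (illustrative, not part of the original header): a common sharpen
// kernel. Because its elements sum to 1.0, overall brightness is preserved.
//
// GPUImage3x3ConvolutionFilter *convolutionFilter = [[GPUImage3x3ConvolutionFilter alloc] init];
// convolutionFilter.convolutionKernel = (GPUMatrix3x3){
//     { 0.0f, -1.0f,  0.0f},
//     {-1.0f,  5.0f, -1.0f},
//     { 0.0f, -1.0f,  0.0f}
// };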

View File

@ -0,0 +1,128 @@
#import "GPUImage3x3ConvolutionFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImage3x3ConvolutionFragmentShaderString = SHADER_STRING
(
precision highp float;
uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
mediump vec3 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate).rgb;
mediump vec3 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate).rgb;
mediump vec3 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate).rgb;
mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
mediump vec3 leftColor = texture2D(inputImageTexture, leftTextureCoordinate).rgb;
mediump vec3 rightColor = texture2D(inputImageTexture, rightTextureCoordinate).rgb;
mediump vec3 topColor = texture2D(inputImageTexture, topTextureCoordinate).rgb;
mediump vec3 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate).rgb;
mediump vec3 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate).rgb;
mediump vec3 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
resultColor += leftColor * convolutionMatrix[1][0] + centerColor.rgb * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];
gl_FragColor = vec4(resultColor, centerColor.a);
}
);
#else
NSString *const kGPUImage3x3ConvolutionFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
uniform mat3 convolutionMatrix;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
vec3 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate).rgb;
vec3 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate).rgb;
vec3 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate).rgb;
vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
vec3 leftColor = texture2D(inputImageTexture, leftTextureCoordinate).rgb;
vec3 rightColor = texture2D(inputImageTexture, rightTextureCoordinate).rgb;
vec3 topColor = texture2D(inputImageTexture, topTextureCoordinate).rgb;
vec3 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate).rgb;
vec3 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate).rgb;
vec3 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
resultColor += leftColor * convolutionMatrix[1][0] + centerColor.rgb * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];
gl_FragColor = vec4(resultColor, centerColor.a);
}
);
#endif
@implementation GPUImage3x3ConvolutionFilter
@synthesize convolutionKernel = _convolutionKernel;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [self initWithFragmentShaderFromString:kGPUImage3x3ConvolutionFragmentShaderString]))
{
return nil;
}
self.convolutionKernel = (GPUMatrix3x3){
{0.f, 0.f, 0.f},
{0.f, 1.f, 0.f},
{0.f, 0.f, 0.f}
};
return self;
}
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [super initWithFragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
convolutionMatrixUniform = [filterProgram uniformIndex:@"convolutionMatrix"];
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setConvolutionKernel:(GPUMatrix3x3)newValue;
{
_convolutionKernel = newValue;
[self setMatrix3f:_convolutionKernel forUniform:convolutionMatrixUniform program:filterProgram];
}
@end

View File

@ -0,0 +1,18 @@
#import "GPUImageFilter.h"
extern NSString *const kGPUImageNearbyTexelSamplingVertexShaderString;
@interface GPUImage3x3TextureSamplingFilter : GPUImageFilter
{
GLint texelWidthUniform, texelHeightUniform;
CGFloat texelWidth, texelHeight;
BOOL hasOverriddenImageSizeFactor;
}
// The texel width and height determine how far out to sample from this texel. By default, this is the normalized width of a pixel, but this can be overridden for different effects.
@property(readwrite, nonatomic) CGFloat texelWidth;
@property(readwrite, nonatomic) CGFloat texelHeight;
@end

View File

@ -0,0 +1,121 @@
#import "GPUImage3x3TextureSamplingFilter.h"
// Override vertex shader to remove dependent texture reads
NSString *const kGPUImageNearbyTexelSamplingVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float texelWidth;
uniform float texelHeight;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
gl_Position = position;
vec2 widthStep = vec2(texelWidth, 0.0);
vec2 heightStep = vec2(0.0, texelHeight);
vec2 widthHeightStep = vec2(texelWidth, texelHeight);
vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);
textureCoordinate = inputTextureCoordinate.xy;
leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;
topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
);
@implementation GPUImage3x3TextureSamplingFilter
@synthesize texelWidth = _texelWidth;
@synthesize texelHeight = _texelHeight;
#pragma mark -
#pragma mark Initialization and teardown
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [super initWithVertexShaderFromString:vertexShaderString fragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
texelWidthUniform = [filterProgram uniformIndex:@"texelWidth"];
texelHeightUniform = [filterProgram uniformIndex:@"texelHeight"];
return self;
}
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [self initWithVertexShaderFromString:kGPUImageNearbyTexelSamplingVertexShaderString fragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
if (!hasOverriddenImageSizeFactor)
{
_texelWidth = 1.0 / filterFrameSize.width;
_texelHeight = 1.0 / filterFrameSize.height;
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:filterProgram];
if (GPUImageRotationSwapsWidthAndHeight(inputRotation))
{
glUniform1f(texelWidthUniform, _texelHeight);
glUniform1f(texelHeightUniform, _texelWidth);
}
else
{
glUniform1f(texelWidthUniform, _texelWidth);
glUniform1f(texelHeightUniform, _texelHeight);
}
});
}
}
#pragma mark -
#pragma mark Accessors
- (void)setTexelWidth:(CGFloat)newValue;
{
hasOverriddenImageSizeFactor = YES;
_texelWidth = newValue;
[self setFloat:_texelWidth forUniform:texelWidthUniform program:filterProgram];
}
- (void)setTexelHeight:(CGFloat)newValue;
{
hasOverriddenImageSizeFactor = YES;
_texelHeight = newValue;
[self setFloat:_texelHeight forUniform:texelHeightUniform program:filterProgram];
}
@end

View File

@ -0,0 +1,9 @@
#import "GPUImageFilterGroup.h"
@interface GPUImageAdaptiveThresholdFilter : GPUImageFilterGroup
/** A multiplier for the background averaging blur radius in pixels, with a default of 4
*/
@property(readwrite, nonatomic) CGFloat blurRadiusInPixels;
@end
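// Usage sketch (illustrative; documentPhoto is an assumed UIImage): widening the
// averaging radius helps with unevenly lit input.
//
// GPUImageAdaptiveThresholdFilter *adaptiveThreshold = [[GPUImageAdaptiveThresholdFilter alloc] init];
// adaptiveThreshold.blurRadiusInPixels = 8.0; // default is 4
// UIImage *binarizedImage = [adaptiveThreshold imageByFilteringImage:documentPhoto];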

View File

@ -0,0 +1,100 @@
#import "GPUImageAdaptiveThresholdFilter.h"
#import "GPUImageFilter.h"
#import "GPUImageTwoInputFilter.h"
#import "GPUImageGrayscaleFilter.h"
#import "GPUImageBoxBlurFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageAdaptiveThresholdFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
highp float blurredInput = texture2D(inputImageTexture, textureCoordinate).r;
highp float localLuminance = texture2D(inputImageTexture2, textureCoordinate2).r;
highp float thresholdResult = step(blurredInput - 0.05, localLuminance);
gl_FragColor = vec4(vec3(thresholdResult), 1.0);
}
);
#else
NSString *const kGPUImageAdaptiveThresholdFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
float blurredInput = texture2D(inputImageTexture, textureCoordinate).r;
float localLuminance = texture2D(inputImageTexture2, textureCoordinate2).r;
float thresholdResult = step(blurredInput - 0.05, localLuminance);
gl_FragColor = vec4(vec3(thresholdResult), 1.0);
}
);
#endif
@interface GPUImageAdaptiveThresholdFilter()
{
GPUImageBoxBlurFilter *boxBlurFilter;
}
@end
@implementation GPUImageAdaptiveThresholdFilter
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
// First pass: reduce to luminance
GPUImageGrayscaleFilter *luminanceFilter = [[GPUImageGrayscaleFilter alloc] init];
[self addFilter:luminanceFilter];
// Second pass: perform a box blur
boxBlurFilter = [[GPUImageBoxBlurFilter alloc] init];
[self addFilter:boxBlurFilter];
// Third pass: compare the blurred background luminance to the local value
GPUImageFilter *adaptiveThresholdFilter = [[GPUImageTwoInputFilter alloc] initWithFragmentShaderFromString:kGPUImageAdaptiveThresholdFragmentShaderString];
[self addFilter:adaptiveThresholdFilter];
[luminanceFilter addTarget:boxBlurFilter];
[boxBlurFilter addTarget:adaptiveThresholdFilter];
// To prevent double updating of this filter, disable updates from the sharp luminance image side
[luminanceFilter addTarget:adaptiveThresholdFilter];
self.initialFilters = [NSArray arrayWithObject:luminanceFilter];
self.terminalFilter = adaptiveThresholdFilter;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setBlurRadiusInPixels:(CGFloat)newValue;
{
boxBlurFilter.blurRadiusInPixels = newValue;
}
- (CGFloat)blurRadiusInPixels;
{
return boxBlurFilter.blurRadiusInPixels;
}
@end

View File

@ -0,0 +1,5 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageAddBlendFilter : GPUImageTwoInputFilter
@end
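// Usage sketch for two-input blend filters generally (illustrative; baseImage and
// overlayImage are assumed UIImages):
//
// GPUImagePicture *basePicture = [[GPUImagePicture alloc] initWithImage:baseImage];
// GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:overlayImage];
// GPUImageAddBlendFilter *addBlend = [[GPUImageAddBlendFilter alloc] init];
// [basePicture addTarget:addBlend];
// [overlayPicture addTarget:addBlend];
// [addBlend useNextFrameForImageCapture];
// [basePicture processImage];
// [overlayPicture processImage];
// UIImage *blendedImage = [addBlend imageFromCurrentFramebuffer];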

View File

@ -0,0 +1,100 @@
#import "GPUImageAddBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageAddBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
lowp vec4 base = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
mediump float r;
if (overlay.r * base.a + base.r * overlay.a >= overlay.a * base.a) {
r = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
} else {
r = overlay.r + base.r;
}
mediump float g;
if (overlay.g * base.a + base.g * overlay.a >= overlay.a * base.a) {
g = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
} else {
g = overlay.g + base.g;
}
mediump float b;
if (overlay.b * base.a + base.b * overlay.a >= overlay.a * base.a) {
b = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
} else {
b = overlay.b + base.b;
}
mediump float a = overlay.a + base.a - overlay.a * base.a;
gl_FragColor = vec4(r, g, b, a);
}
);
#else
NSString *const kGPUImageAddBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
float r;
if (overlay.r * base.a + base.r * overlay.a >= overlay.a * base.a) {
r = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
} else {
r = overlay.r + base.r;
}
float g;
if (overlay.g * base.a + base.g * overlay.a >= overlay.a * base.a) {
g = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
} else {
g = overlay.g + base.g;
}
float b;
if (overlay.b * base.a + base.b * overlay.a >= overlay.a * base.a) {
b = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
} else {
b = overlay.b + base.b;
}
float a = overlay.a + base.a - overlay.a * base.a;
gl_FragColor = vec4(r, g, b, a);
}
);
#endif
@implementation GPUImageAddBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageAddBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@ -0,0 +1,11 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageAlphaBlendFilter : GPUImageTwoInputFilter
{
GLint mixUniform;
}
// Mix ranges from 0.0 (only image 1) to 1.0 (only image 2); the filter initializes it to 0.5
@property(readwrite, nonatomic) CGFloat mix;
@end

View File

@ -0,0 +1,72 @@
#import "GPUImageAlphaBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageAlphaBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float mixturePercent;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(mix(textureColor.rgb, textureColor2.rgb, textureColor2.a * mixturePercent), textureColor.a);
}
);
#else
NSString *const kGPUImageAlphaBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform float mixturePercent;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(mix(textureColor.rgb, textureColor2.rgb, textureColor2.a * mixturePercent), textureColor.a);
}
);
#endif
@implementation GPUImageAlphaBlendFilter
@synthesize mix = _mix;
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageAlphaBlendFragmentShaderString]))
{
return nil;
}
mixUniform = [filterProgram uniformIndex:@"mixturePercent"];
self.mix = 0.5;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setMix:(CGFloat)newValue;
{
_mix = newValue;
[self setFloat:_mix forUniform:mixUniform program:filterProgram];
}
@end

View File

@ -0,0 +1,17 @@
#import "GPUImageFilterGroup.h"
@class GPUImagePicture;
/** A photo filter based on Photoshop action by Amatorka
http://amatorka.deviantart.com/art/Amatorka-Action-2-121069631
*/
// Note: To use this effect you have to add lookup_amatorka.png
// from the framework's Resources folder to your application bundle.
@interface GPUImageAmatorkaFilter : GPUImageFilterGroup
{
GPUImagePicture *lookupImageSource;
}
@end

View File

@ -0,0 +1,38 @@
#import "GPUImageAmatorkaFilter.h"
#import "GPUImagePicture.h"
#import "GPUImageLookupFilter.h"
@implementation GPUImageAmatorkaFilter
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
UIImage *image = [UIImage imageNamed:@"lookup_amatorka.png"];
#else
NSImage *image = [NSImage imageNamed:@"lookup_amatorka.png"];
#endif
NSAssert(image, @"To use GPUImageAmatorkaFilter you need to add lookup_amatorka.png from GPUImage/framework/Resources to your application bundle.");
lookupImageSource = [[GPUImagePicture alloc] initWithImage:image];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];
[self addFilter:lookupFilter];
[lookupImageSource addTarget:lookupFilter atTextureLocation:1];
[lookupImageSource processImage];
self.initialFilters = [NSArray arrayWithObjects:lookupFilter, nil];
self.terminalFilter = lookupFilter;
return self;
}
#pragma mark -
#pragma mark Accessors
@end

View File

@ -0,0 +1,20 @@
#import "GPUImageFilter.h"
extern NSString *const kGPUImageColorAveragingVertexShaderString;
@interface GPUImageAverageColor : GPUImageFilter
{
GLint texelWidthUniform, texelHeightUniform;
NSUInteger numberOfStages;
GLubyte *rawImagePixels;
CGSize finalStageSize;
}
// This block is called on the completion of color averaging for a frame
@property(nonatomic, copy) void(^colorAverageProcessingFinishedBlock)(CGFloat redComponent, CGFloat greenComponent, CGFloat blueComponent, CGFloat alphaComponent, CMTime frameTime);
- (void)extractAverageColorAtFrameTime:(CMTime)frameTime;
@end
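// Usage sketch (illustrative; videoCamera is an assumed GPUImageVideoCamera):
//
// GPUImageAverageColor *averageColor = [[GPUImageAverageColor alloc] init];
// [averageColor setColorAverageProcessingFinishedBlock:^(CGFloat r, CGFloat g, CGFloat b, CGFloat a, CMTime frameTime) {
//     NSLog(@"Average color: %.3f %.3f %.3f", r, g, b);
// }];
// [videoCamera addTarget:averageColor];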

View File

@ -0,0 +1,210 @@
#import "GPUImageAverageColor.h"
NSString *const kGPUImageColorAveragingVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float texelWidth;
uniform float texelHeight;
varying vec2 upperLeftInputTextureCoordinate;
varying vec2 upperRightInputTextureCoordinate;
varying vec2 lowerLeftInputTextureCoordinate;
varying vec2 lowerRightInputTextureCoordinate;
void main()
{
gl_Position = position;
upperLeftInputTextureCoordinate = inputTextureCoordinate.xy + vec2(-texelWidth, -texelHeight);
upperRightInputTextureCoordinate = inputTextureCoordinate.xy + vec2(texelWidth, -texelHeight);
lowerLeftInputTextureCoordinate = inputTextureCoordinate.xy + vec2(-texelWidth, texelHeight);
lowerRightInputTextureCoordinate = inputTextureCoordinate.xy + vec2(texelWidth, texelHeight);
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorAveragingFragmentShaderString = SHADER_STRING
(
precision highp float;
uniform sampler2D inputImageTexture;
varying highp vec2 outputTextureCoordinate;
varying highp vec2 upperLeftInputTextureCoordinate;
varying highp vec2 upperRightInputTextureCoordinate;
varying highp vec2 lowerLeftInputTextureCoordinate;
varying highp vec2 lowerRightInputTextureCoordinate;
void main()
{
highp vec4 upperLeftColor = texture2D(inputImageTexture, upperLeftInputTextureCoordinate);
highp vec4 upperRightColor = texture2D(inputImageTexture, upperRightInputTextureCoordinate);
highp vec4 lowerLeftColor = texture2D(inputImageTexture, lowerLeftInputTextureCoordinate);
highp vec4 lowerRightColor = texture2D(inputImageTexture, lowerRightInputTextureCoordinate);
gl_FragColor = 0.25 * (upperLeftColor + upperRightColor + lowerLeftColor + lowerRightColor);
}
);
#else
NSString *const kGPUImageColorAveragingFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
varying vec2 outputTextureCoordinate;
varying vec2 upperLeftInputTextureCoordinate;
varying vec2 upperRightInputTextureCoordinate;
varying vec2 lowerLeftInputTextureCoordinate;
varying vec2 lowerRightInputTextureCoordinate;
void main()
{
vec4 upperLeftColor = texture2D(inputImageTexture, upperLeftInputTextureCoordinate);
vec4 upperRightColor = texture2D(inputImageTexture, upperRightInputTextureCoordinate);
vec4 lowerLeftColor = texture2D(inputImageTexture, lowerLeftInputTextureCoordinate);
vec4 lowerRightColor = texture2D(inputImageTexture, lowerRightInputTextureCoordinate);
gl_FragColor = 0.25 * (upperLeftColor + upperRightColor + lowerLeftColor + lowerRightColor);
}
);
#endif
@implementation GPUImageAverageColor
@synthesize colorAverageProcessingFinishedBlock = _colorAverageProcessingFinishedBlock;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithVertexShaderFromString:kGPUImageColorAveragingVertexShaderString fragmentShaderFromString:kGPUImageColorAveragingFragmentShaderString]))
{
return nil;
}
texelWidthUniform = [filterProgram uniformIndex:@"texelWidth"];
texelHeightUniform = [filterProgram uniformIndex:@"texelHeight"];
finalStageSize = CGSizeMake(1.0, 1.0);
__unsafe_unretained GPUImageAverageColor *weakSelf = self;
[self setFrameProcessingCompletionBlock:^(GPUImageOutput *filter, CMTime frameTime) {
[weakSelf extractAverageColorAtFrameTime:frameTime];
}];
return self;
}
- (void)dealloc;
{
if (rawImagePixels != NULL)
{
free(rawImagePixels);
}
}
#pragma mark -
#pragma mark Managing the display FBOs
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
if (self.preventRendering)
{
[firstInputFramebuffer unlock];
return;
}
outputFramebuffer = nil;
[GPUImageContext setActiveShaderProgram:filterProgram];
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
GLuint currentTexture = [firstInputFramebuffer texture];
NSUInteger numberOfReductionsInX = floor(log(inputTextureSize.width) / log(4.0));
NSUInteger numberOfReductionsInY = floor(log(inputTextureSize.height) / log(4.0));
NSUInteger reductionsToHitSideLimit = MIN(numberOfReductionsInX, numberOfReductionsInY);
for (NSUInteger currentReduction = 0; currentReduction < reductionsToHitSideLimit; currentReduction++)
{
CGSize currentStageSize = CGSizeMake(floor(inputTextureSize.width / pow(4.0, currentReduction + 1.0)), floor(inputTextureSize.height / pow(4.0, currentReduction + 1.0)));
if ( (currentStageSize.height < 2.0) || (currentStageSize.width < 2.0) )
{
// A really small last stage seems to cause significant errors in the average, so I abort and leave the rest to the CPU at this point
break;
// currentStageSize.height = 2.0; // TODO: Rotate the image to account for this case, which causes FBO construction to fail
}
[outputFramebuffer unlock];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:currentStageSize textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, currentTexture);
glUniform1i(filterInputTextureUniform, 2);
glUniform1f(texelWidthUniform, 0.5 / currentStageSize.width);
glUniform1f(texelHeightUniform, 0.5 / currentStageSize.height);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
currentTexture = [outputFramebuffer texture];
finalStageSize = currentStageSize;
}
[firstInputFramebuffer unlock];
}
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
inputRotation = kGPUImageNoRotation;
}
- (void)extractAverageColorAtFrameTime:(CMTime)frameTime;
{
runSynchronouslyOnVideoProcessingQueue(^{
// we need a normal color texture for averaging the color values
NSAssert(self.outputTextureOptions.internalFormat == GL_RGBA, @"The output texture internal format for this filter must be GL_RGBA.");
NSAssert(self.outputTextureOptions.type == GL_UNSIGNED_BYTE, @"The type of the output texture of this filter must be GL_UNSIGNED_BYTE.");
NSUInteger totalNumberOfPixels = round(finalStageSize.width * finalStageSize.height);
if (rawImagePixels == NULL)
{
rawImagePixels = (GLubyte *)malloc(totalNumberOfPixels * 4);
}
[GPUImageContext useImageProcessingContext];
[outputFramebuffer activateFramebuffer];
glReadPixels(0, 0, (int)finalStageSize.width, (int)finalStageSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
NSUInteger redTotal = 0, greenTotal = 0, blueTotal = 0, alphaTotal = 0;
NSUInteger byteIndex = 0;
for (NSUInteger currentPixel = 0; currentPixel < totalNumberOfPixels; currentPixel++)
{
redTotal += rawImagePixels[byteIndex++];
greenTotal += rawImagePixels[byteIndex++];
blueTotal += rawImagePixels[byteIndex++];
alphaTotal += rawImagePixels[byteIndex++];
}
CGFloat normalizedRedTotal = (CGFloat)redTotal / (CGFloat)totalNumberOfPixels / 255.0;
CGFloat normalizedGreenTotal = (CGFloat)greenTotal / (CGFloat)totalNumberOfPixels / 255.0;
CGFloat normalizedBlueTotal = (CGFloat)blueTotal / (CGFloat)totalNumberOfPixels / 255.0;
CGFloat normalizedAlphaTotal = (CGFloat)alphaTotal / (CGFloat)totalNumberOfPixels / 255.0;
if (_colorAverageProcessingFinishedBlock != NULL)
{
_colorAverageProcessingFinishedBlock(normalizedRedTotal, normalizedGreenTotal, normalizedBlueTotal, normalizedAlphaTotal, frameTime);
}
});
}
@end

View File

@ -0,0 +1,8 @@
#import "GPUImageFilterGroup.h"
@interface GPUImageAverageLuminanceThresholdFilter : GPUImageFilterGroup
// This is multiplied by the continually calculated average image luminosity to arrive at the final threshold. Default is 1.0.
@property(readwrite, nonatomic) CGFloat thresholdMultiplier;
@end

View File

@ -0,0 +1,47 @@
#import "GPUImageAverageLuminanceThresholdFilter.h"
#import "GPUImageLuminosity.h"
#import "GPUImageLuminanceThresholdFilter.h"
@interface GPUImageAverageLuminanceThresholdFilter()
{
GPUImageLuminosity *luminosityFilter;
GPUImageLuminanceThresholdFilter *luminanceThresholdFilter;
}
@end
@implementation GPUImageAverageLuminanceThresholdFilter
@synthesize thresholdMultiplier = _thresholdMultiplier;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
self.thresholdMultiplier = 1.0;
luminosityFilter = [[GPUImageLuminosity alloc] init];
[self addFilter:luminosityFilter];
luminanceThresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
[self addFilter:luminanceThresholdFilter];
__unsafe_unretained GPUImageAverageLuminanceThresholdFilter *weakSelf = self;
__unsafe_unretained GPUImageLuminanceThresholdFilter *weakThreshold = luminanceThresholdFilter;
[luminosityFilter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
weakThreshold.threshold = luminosity * weakSelf.thresholdMultiplier;
}];
self.initialFilters = [NSArray arrayWithObjects:luminosityFilter, luminanceThresholdFilter, nil];
self.terminalFilter = luminanceThresholdFilter;
return self;
}
@end

View File

@ -0,0 +1,10 @@
#import "GPUImageGaussianBlurFilter.h"
@interface GPUImageBilateralFilter : GPUImageGaussianBlurFilter
{
GLint firstDistanceNormalizationFactorUniform;
GLint secondDistanceNormalizationFactorUniform;
}
// A normalization factor for the distance between central color and sample color.
@property(nonatomic, readwrite) CGFloat distanceNormalizationFactor;
@end
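// Usage sketch (illustrative): raising distanceNormalizationFactor makes color
// differences count for more, so edges are preserved more aggressively.
//
// GPUImageBilateralFilter *bilateralFilter = [[GPUImageBilateralFilter alloc] init];
// bilateralFilter.distanceNormalizationFactor = 16.0; // default is 8.0
// [videoCamera addTarget:bilateralFilter]; // videoCamera: an assumed GPUImageVideoCamera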

View File

@ -0,0 +1,231 @@
#import "GPUImageBilateralFilter.h"
NSString *const kGPUImageBilateralBlurVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
const int GAUSSIAN_SAMPLES = 9;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
// Calculate the positions for the blur
int multiplier = 0;
vec2 blurStep;
vec2 singleStepOffset = vec2(texelWidthOffset, texelHeightOffset);
for (int i = 0; i < GAUSSIAN_SAMPLES; i++)
{
multiplier = (i - ((GAUSSIAN_SAMPLES - 1) / 2));
// Blur in x (horizontal)
blurStep = float(multiplier) * singleStepOffset;
blurCoordinates[i] = inputTextureCoordinate.xy + blurStep;
}
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageBilateralFilterFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
const lowp int GAUSSIAN_SAMPLES = 9;
varying highp vec2 textureCoordinate;
varying highp vec2 blurCoordinates[GAUSSIAN_SAMPLES];
uniform mediump float distanceNormalizationFactor;
void main()
{
lowp vec4 centralColor;
lowp float gaussianWeightTotal;
lowp vec4 sum;
lowp vec4 sampleColor;
lowp float distanceFromCentralColor;
lowp float gaussianWeight;
centralColor = texture2D(inputImageTexture, blurCoordinates[4]);
gaussianWeightTotal = 0.18;
sum = centralColor * 0.18;
sampleColor = texture2D(inputImageTexture, blurCoordinates[0]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.05 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[1]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.09 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[2]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.12 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[3]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.15 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[5]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.15 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[6]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.12 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[7]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.09 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[8]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.05 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
gl_FragColor = sum / gaussianWeightTotal;
}
);
#else
NSString *const kGPUImageBilateralFilterFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
const int GAUSSIAN_SAMPLES = 9;
varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];
uniform float distanceNormalizationFactor;
void main()
{
vec4 centralColor;
float gaussianWeightTotal;
vec4 sum;
vec4 sampleColor;
float distanceFromCentralColor;
float gaussianWeight;
centralColor = texture2D(inputImageTexture, blurCoordinates[4]);
gaussianWeightTotal = 0.18;
sum = centralColor * 0.18;
sampleColor = texture2D(inputImageTexture, blurCoordinates[0]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.05 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[1]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.09 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[2]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.12 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[3]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.15 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[5]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.15 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[6]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.12 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[7]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.09 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
sampleColor = texture2D(inputImageTexture, blurCoordinates[8]);
distanceFromCentralColor = min(distance(centralColor, sampleColor) * distanceNormalizationFactor, 1.0);
gaussianWeight = 0.05 * (1.0 - distanceFromCentralColor);
gaussianWeightTotal += gaussianWeight;
sum += sampleColor * gaussianWeight;
gl_FragColor = sum / gaussianWeightTotal;
}
);
#endif
@implementation GPUImageBilateralFilter
@synthesize distanceNormalizationFactor = _distanceNormalizationFactor;
- (id)init;
{
if (!(self = [super initWithFirstStageVertexShaderFromString:kGPUImageBilateralBlurVertexShaderString
firstStageFragmentShaderFromString:kGPUImageBilateralFilterFragmentShaderString
secondStageVertexShaderFromString:kGPUImageBilateralBlurVertexShaderString
secondStageFragmentShaderFromString:kGPUImageBilateralFilterFragmentShaderString])) {
return nil;
}
firstDistanceNormalizationFactorUniform = [filterProgram uniformIndex:@"distanceNormalizationFactor"];
secondDistanceNormalizationFactorUniform = [secondFilterProgram uniformIndex:@"distanceNormalizationFactor"];
self.texelSpacingMultiplier = 4.0;
self.distanceNormalizationFactor = 8.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setDistanceNormalizationFactor:(CGFloat)newValue
{
_distanceNormalizationFactor = newValue;
[self setFloat:newValue
forUniform:firstDistanceNormalizationFactorUniform
program:filterProgram];
[self setFloat:newValue
forUniform:secondDistanceNormalizationFactorUniform
program:secondFilterProgram];
}
@end

View File

@ -0,0 +1,7 @@
#import "GPUImageGaussianBlurFilter.h"
/** A hardware-accelerated box blur of an image
*/
@interface GPUImageBoxBlurFilter : GPUImageGaussianBlurFilter
@end

View File

@ -0,0 +1,178 @@
#import "GPUImageBoxBlurFilter.h"
@implementation GPUImageBoxBlurFilter
+ (NSString *)vertexShaderForOptimizedBlurOfRadius:(NSUInteger)blurRadius sigma:(CGFloat)sigma;
{
if (blurRadius < 1)
{
return kGPUImageVertexShaderString;
}
// From these weights we calculate the offsets to read interpolated values from
NSUInteger numberOfOptimizedOffsets = MIN(blurRadius / 2 + (blurRadius % 2), 7);
NSMutableString *shaderString = [[NSMutableString alloc] init];
// Header
[shaderString appendFormat:@"\
attribute vec4 position;\n\
attribute vec4 inputTextureCoordinate;\n\
\n\
uniform float texelWidthOffset;\n\
uniform float texelHeightOffset;\n\
\n\
varying vec2 blurCoordinates[%lu];\n\
\n\
void main()\n\
{\n\
gl_Position = position;\n\
\n\
vec2 singleStepOffset = vec2(texelWidthOffset, texelHeightOffset);\n", (unsigned long)(1 + (numberOfOptimizedOffsets * 2))];
// Inner offset loop
[shaderString appendString:@"blurCoordinates[0] = inputTextureCoordinate.xy;\n"];
for (NSUInteger currentOptimizedOffset = 0; currentOptimizedOffset < numberOfOptimizedOffsets; currentOptimizedOffset++)
{
GLfloat optimizedOffset = (GLfloat)(currentOptimizedOffset * 2) + 1.5;
[shaderString appendFormat:@"\
blurCoordinates[%lu] = inputTextureCoordinate.xy + singleStepOffset * %f;\n\
blurCoordinates[%lu] = inputTextureCoordinate.xy - singleStepOffset * %f;\n", (unsigned long)((currentOptimizedOffset * 2) + 1), optimizedOffset, (unsigned long)((currentOptimizedOffset * 2) + 2), optimizedOffset];
}
// Footer
[shaderString appendString:@"}\n"];
return shaderString;
}
+ (NSString *)fragmentShaderForOptimizedBlurOfRadius:(NSUInteger)blurRadius sigma:(CGFloat)sigma;
{
if (blurRadius < 1)
{
return kGPUImagePassthroughFragmentShaderString;
}
NSUInteger numberOfOptimizedOffsets = MIN(blurRadius / 2 + (blurRadius % 2), 7);
NSUInteger trueNumberOfOptimizedOffsets = blurRadius / 2 + (blurRadius % 2);
NSMutableString *shaderString = [[NSMutableString alloc] init];
// Header
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
[shaderString appendFormat:@"\
uniform sampler2D inputImageTexture;\n\
uniform highp float texelWidthOffset;\n\
uniform highp float texelHeightOffset;\n\
\n\
varying highp vec2 blurCoordinates[%lu];\n\
\n\
void main()\n\
{\n\
lowp vec4 sum = vec4(0.0);\n", (unsigned long)(1 + (numberOfOptimizedOffsets * 2)) ];
#else
[shaderString appendFormat:@"\
uniform sampler2D inputImageTexture;\n\
uniform float texelWidthOffset;\n\
uniform float texelHeightOffset;\n\
\n\
varying vec2 blurCoordinates[%lu];\n\
\n\
void main()\n\
{\n\
vec4 sum = vec4(0.0);\n", 1 + (numberOfOptimizedOffsets * 2) ];
#endif
GLfloat boxWeight = 1.0 / (GLfloat)((blurRadius * 2) + 1);
// Inner texture loop
[shaderString appendFormat:@"sum += texture2D(inputImageTexture, blurCoordinates[0]) * %f;\n", boxWeight];
for (NSUInteger currentBlurCoordinateIndex = 0; currentBlurCoordinateIndex < numberOfOptimizedOffsets; currentBlurCoordinateIndex++)
{
[shaderString appendFormat:@"sum += texture2D(inputImageTexture, blurCoordinates[%lu]) * %f;\n", (unsigned long)((currentBlurCoordinateIndex * 2) + 1), boxWeight * 2.0];
[shaderString appendFormat:@"sum += texture2D(inputImageTexture, blurCoordinates[%lu]) * %f;\n", (unsigned long)((currentBlurCoordinateIndex * 2) + 2), boxWeight * 2.0];
}
// If the number of required samples exceeds the amount we can pass in via varyings, we have to do dependent texture reads in the fragment shader
if (trueNumberOfOptimizedOffsets > numberOfOptimizedOffsets)
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
[shaderString appendString:@"highp vec2 singleStepOffset = vec2(texelWidthOffset, texelHeightOffset);\n"];
#else
[shaderString appendString:@"vec2 singleStepOffset = vec2(texelWidthOffset, texelHeightOffset);\n"];
#endif
for (NSUInteger currentOverflowTextureRead = numberOfOptimizedOffsets; currentOverflowTextureRead < trueNumberOfOptimizedOffsets; currentOverflowTextureRead++)
{
GLfloat optimizedOffset = (GLfloat)(currentOverflowTextureRead * 2) + 1.5;
[shaderString appendFormat:@"sum += texture2D(inputImageTexture, blurCoordinates[0] + singleStepOffset * %f) * %f;\n", optimizedOffset, boxWeight * 2.0];
[shaderString appendFormat:@"sum += texture2D(inputImageTexture, blurCoordinates[0] - singleStepOffset * %f) * %f;\n", optimizedOffset, boxWeight * 2.0];
}
}
// Footer
[shaderString appendString:@"\
gl_FragColor = sum;\n\
}\n"];
return shaderString;
}
- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
[super setupFilterForSize:filterFrameSize];
if (shouldResizeBlurRadiusWithImageSize == YES)
{
}
}
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
// NSString *currentGaussianBlurVertexShader = [GPUImageGaussianBlurFilter vertexShaderForStandardGaussianOfRadius:4 sigma:2.0];
// NSString *currentGaussianBlurFragmentShader = [GPUImageGaussianBlurFilter fragmentShaderForStandardGaussianOfRadius:4 sigma:2.0];
NSString *currentBoxBlurVertexShader = [[self class] vertexShaderForOptimizedBlurOfRadius:4 sigma:0.0];
NSString *currentBoxBlurFragmentShader = [[self class] fragmentShaderForOptimizedBlurOfRadius:4 sigma:0.0];
if (!(self = [super initWithFirstStageVertexShaderFromString:currentBoxBlurVertexShader firstStageFragmentShaderFromString:currentBoxBlurFragmentShader secondStageVertexShaderFromString:currentBoxBlurVertexShader secondStageFragmentShaderFromString:currentBoxBlurFragmentShader]))
{
return nil;
}
_blurRadiusInPixels = 4.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setBlurRadiusInPixels:(CGFloat)newValue;
{
CGFloat newBlurRadius = round(round(newValue / 2.0) * 2.0); // For now, only do even radii
if (newBlurRadius != _blurRadiusInPixels)
{
_blurRadiusInPixels = newBlurRadius;
NSString *newGaussianBlurVertexShader = [[self class] vertexShaderForOptimizedBlurOfRadius:_blurRadiusInPixels sigma:0.0];
NSString *newGaussianBlurFragmentShader = [[self class] fragmentShaderForOptimizedBlurOfRadius:_blurRadiusInPixels sigma:0.0];
// NSLog(@"Optimized vertex shader: \n%@", newGaussianBlurVertexShader);
// NSLog(@"Optimized fragment shader: \n%@", newGaussianBlurFragmentShader);
//
[self switchToVertexShader:newGaussianBlurVertexShader fragmentShader:newGaussianBlurFragmentShader];
}
shouldResizeBlurRadiusWithImageSize = NO;
}
@end
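As a sanity check on the shader generators above, worked directly from the code: for blurRadius = 4, boxWeight = 1/(2*4+1) = 1/9 and numberOfOptimizedOffsets = 2, so the generated fragment shader performs one center read weighted 1/9 plus four interpolated reads weighted 2/9 each, and 1/9 + 4 * 2/9 = 1; the interpolated reads at offsets of 1.5 and 3.5 texels each average two adjacent texels, covering all nine samples. Both generators are public class methods, so the generated sources can be inspected and the filter wired up with a sketch like this (videoCamera and filteredView are assumed to be a configured GPUImageVideoCamera and GPUImageView):

NSLog(@"%@", [GPUImageBoxBlurFilter fragmentShaderForOptimizedBlurOfRadius:4 sigma:0.0]);
GPUImageBoxBlurFilter *boxBlurFilter = [[GPUImageBoxBlurFilter alloc] init];
boxBlurFilter.blurRadiusInPixels = 5.0; // rounded to the even radius 6.0 by the accessor above
[videoCamera addTarget:boxBlurFilter];
[boxBlurFilter addTarget:filteredView];
[videoCamera startCameraCapture];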

View File

@@ -0,0 +1,11 @@
#import "GPUImageFilter.h"
@interface GPUImageBrightnessFilter : GPUImageFilter
{
GLint brightnessUniform;
}
// Brightness ranges from -1.0 to 1.0, with 0.0 as the normal level
@property(readwrite, nonatomic) CGFloat brightness;
@end

View File

@@ -0,0 +1,66 @@
#import "GPUImageBrightnessFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageBrightnessFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float brightness;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
}
);
#else
NSString *const kGPUImageBrightnessFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float brightness;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
}
);
#endif
@implementation GPUImageBrightnessFilter
@synthesize brightness = _brightness;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageBrightnessFragmentShaderString]))
{
return nil;
}
brightnessUniform = [filterProgram uniformIndex:@"brightness"];
self.brightness = 0.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setBrightness:(CGFloat)newValue;
{
_brightness = newValue;
[self setFloat:_brightness forUniform:brightnessUniform program:filterProgram];
}
@end
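A one-off still-image application, as a sketch (sourceImage is an assumed UIImage; imageByFilteringImage: is the convenience method inherited from GPUImageOutput):

GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
brightnessFilter.brightness = 0.2; // adds 0.2 to each RGB channel; range is -1.0 to 1.0
UIImage *brightenedImage = [brightnessFilter imageByFilteringImage:sourceImage];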

View File

@@ -0,0 +1,10 @@
#import "GPUImageFilter.h"
@interface GPUImageBuffer : GPUImageFilter
{
NSMutableArray *bufferedFramebuffers;
}
@property(readwrite, nonatomic) NSUInteger bufferSize;
@end

View File

@@ -0,0 +1,112 @@
#import "GPUImageBuffer.h"
@interface GPUImageBuffer()
@end
@implementation GPUImageBuffer
@synthesize bufferSize = _bufferSize;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [self initWithFragmentShaderFromString:kGPUImagePassthroughFragmentShaderString]))
{
return nil;
}
bufferedFramebuffers = [[NSMutableArray alloc] init];
// [bufferedTextures addObject:[NSNumber numberWithInt:outputTexture]];
_bufferSize = 1;
return self;
}
- (void)dealloc
{
for (GPUImageFramebuffer *currentFramebuffer in bufferedFramebuffers)
{
[currentFramebuffer unlock];
}
}
#pragma mark -
#pragma mark GPUImageInput
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
if ([bufferedFramebuffers count] >= _bufferSize)
{
outputFramebuffer = [bufferedFramebuffers objectAtIndex:0];
[bufferedFramebuffers removeObjectAtIndex:0];
}
else
{
// Nothing yet in the buffer, so don't process further until the buffer is full
outputFramebuffer = firstInputFramebuffer;
[firstInputFramebuffer lock];
}
[bufferedFramebuffers addObject:firstInputFramebuffer];
// Need to pass along rotation information, as we're just holding on to buffered framebuffers and not rotating them ourselves
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setInputRotation:inputRotation atIndex:textureIndex];
}
}
// Let the downstream video elements see the previous frame from the buffer before rendering a new one into place
[self informTargetsAboutNewFrameAtTime:frameTime];
// [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
}
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
// No need to render to another texture anymore, since we'll be hanging on to the textures in our buffer
}
#pragma mark -
#pragma mark Accessors
- (void)setBufferSize:(NSUInteger)newValue;
{
if ( (newValue == _bufferSize) || (newValue < 1) )
{
return;
}
if (newValue > _bufferSize)
{
NSUInteger texturesToAdd = newValue - _bufferSize;
for (NSUInteger currentTextureIndex = 0; currentTextureIndex < texturesToAdd; currentTextureIndex++)
{
// TODO: Deal with the growth of the size of the buffer by rotating framebuffers, not textures
}
}
else
{
NSUInteger texturesToRemove = _bufferSize - newValue;
for (NSUInteger currentTextureIndex = 0; currentTextureIndex < texturesToRemove; currentTextureIndex++)
{
GPUImageFramebuffer *lastFramebuffer = [bufferedFramebuffers lastObject];
[bufferedFramebuffers removeObjectAtIndex:([bufferedFramebuffers count] - 1)];
[lastFramebuffer unlock];
lastFramebuffer = nil;
}
}
_bufferSize = newValue;
}
@end
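Since the buffer emits its oldest frame each time a new one arrives, it acts as a fixed frame delay. A sketch delaying a camera preview (videoCamera and previewView are assumed to be a configured GPUImageVideoCamera and GPUImageView):

GPUImageBuffer *delayBuffer = [[GPUImageBuffer alloc] init];
delayBuffer.bufferSize = 10; // the preview lags the live feed by roughly 10 frames
[videoCamera addTarget:delayBuffer];
[delayBuffer addTarget:previewView];
[videoCamera startCameraCapture];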

View File

@@ -0,0 +1,16 @@
#import "GPUImageFilter.h"
/// Creates a bulge distortion on the image
@interface GPUImageBulgeDistortionFilter : GPUImageFilter
{
GLint aspectRatioUniform, radiusUniform, centerUniform, scaleUniform;
}
/// The center about which to apply the distortion, with a default of (0.5, 0.5)
@property(readwrite, nonatomic) CGPoint center;
/// The radius of the distortion, ranging from 0.0 to 1.0, with a default of 0.25
@property(readwrite, nonatomic) CGFloat radius;
/// The amount of distortion to apply, from -1.0 to 1.0, with a default of 0.5
@property(readwrite, nonatomic) CGFloat scale;
@end
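Illustrative parameter values for the distortion (a sketch; any GPUImage source can feed the filter, and the specific numbers are assumptions):

GPUImageBulgeDistortionFilter *bulgeFilter = [[GPUImageBulgeDistortionFilter alloc] init];
bulgeFilter.center = CGPointMake(0.5, 0.4); // normalized coordinates
bulgeFilter.radius = 0.3;
bulgeFilter.scale = 0.7; // positive values bulge outward; negative values pinch inward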

View File

@@ -0,0 +1,174 @@
#import "GPUImageBulgeDistortionFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageBulgeDistortionFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float aspectRatio;
uniform highp vec2 center;
uniform highp float radius;
uniform highp float scale;
void main()
{
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float dist = distance(center, textureCoordinateToUse);
textureCoordinateToUse = textureCoordinate;
if (dist < radius)
{
textureCoordinateToUse -= center;
highp float percent = 1.0 - ((radius - dist) / radius) * scale;
percent = percent * percent;
textureCoordinateToUse = textureCoordinateToUse * percent;
textureCoordinateToUse += center;
}
gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse );
}
);
#else
NSString *const kGPUImageBulgeDistortionFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float aspectRatio;
uniform vec2 center;
uniform float radius;
uniform float scale;
void main()
{
vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
float dist = distance(center, textureCoordinateToUse);
textureCoordinateToUse = textureCoordinate;
if (dist < radius)
{
textureCoordinateToUse -= center;
float percent = 1.0 - ((radius - dist) / radius) * scale;
percent = percent * percent;
textureCoordinateToUse = textureCoordinateToUse * percent;
textureCoordinateToUse += center;
}
gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse );
}
);
#endif
@interface GPUImageBulgeDistortionFilter ()
- (void)adjustAspectRatio;
@property (readwrite, nonatomic) CGFloat aspectRatio;
@end
@implementation GPUImageBulgeDistortionFilter
@synthesize aspectRatio = _aspectRatio;
@synthesize center = _center;
@synthesize radius = _radius;
@synthesize scale = _scale;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageBulgeDistortionFragmentShaderString]))
{
return nil;
}
aspectRatioUniform = [filterProgram uniformIndex:@"aspectRatio"];
radiusUniform = [filterProgram uniformIndex:@"radius"];
scaleUniform = [filterProgram uniformIndex:@"scale"];
centerUniform = [filterProgram uniformIndex:@"center"];
self.radius = 0.25;
self.scale = 0.5;
self.center = CGPointMake(0.5, 0.5);
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)adjustAspectRatio;
{
if (GPUImageRotationSwapsWidthAndHeight(inputRotation))
{
[self setAspectRatio:(inputTextureSize.width / inputTextureSize.height)];
}
else
{
[self setAspectRatio:(inputTextureSize.height / inputTextureSize.width)];
}
}
- (void)forceProcessingAtSize:(CGSize)frameSize;
{
[super forceProcessingAtSize:frameSize];
[self adjustAspectRatio];
}
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
CGSize oldInputSize = inputTextureSize;
[super setInputSize:newSize atIndex:textureIndex];
if ( (!CGSizeEqualToSize(oldInputSize, inputTextureSize)) && (!CGSizeEqualToSize(newSize, CGSizeZero)) )
{
[self adjustAspectRatio];
}
}
- (void)setAspectRatio:(CGFloat)newValue;
{
_aspectRatio = newValue;
[self setFloat:_aspectRatio forUniform:aspectRatioUniform program:filterProgram];
}
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
[super setInputRotation:newInputRotation atIndex:textureIndex];
[self setCenter:self.center];
[self adjustAspectRatio];
}
- (void)setRadius:(CGFloat)newValue;
{
_radius = newValue;
[self setFloat:_radius forUniform:radiusUniform program:filterProgram];
}
- (void)setScale:(CGFloat)newValue;
{
_scale = newValue;
[self setFloat:_scale forUniform:scaleUniform program:filterProgram];
}
- (void)setCenter:(CGPoint)newValue;
{
_center = newValue;
CGPoint rotatedPoint = [self rotatedPoint:_center forRotation:inputRotation];
[self setPoint:rotatedPoint forUniform:centerUniform program:filterProgram];
}
@end

View File

@@ -0,0 +1,5 @@
#import "GPUImageFilter.h"
@interface GPUImageCGAColorspaceFilter : GPUImageFilter
@end

View File

@@ -0,0 +1,113 @@
//
// GPUImageCGAColorspaceFilter.m
//
#import "GPUImageCGAColorspaceFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageCGAColorspaceFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
highp vec2 sampleDivisor = vec2(1.0 / 200.0, 1.0 / 320.0);
//highp vec4 colorDivisor = vec4(colorDepth);
highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
highp vec4 color = texture2D(inputImageTexture, samplePos );
//gl_FragColor = texture2D(inputImageTexture, samplePos );
mediump vec4 colorCyan = vec4(85.0 / 255.0, 1.0, 1.0, 1.0);
mediump vec4 colorMagenta = vec4(1.0, 85.0 / 255.0, 1.0, 1.0);
mediump vec4 colorWhite = vec4(1.0, 1.0, 1.0, 1.0);
mediump vec4 colorBlack = vec4(0.0, 0.0, 0.0, 1.0);
mediump vec4 endColor;
highp float blackDistance = distance(color, colorBlack);
highp float whiteDistance = distance(color, colorWhite);
highp float magentaDistance = distance(color, colorMagenta);
highp float cyanDistance = distance(color, colorCyan);
mediump vec4 finalColor;
highp float colorDistance = min(magentaDistance, cyanDistance);
colorDistance = min(colorDistance, whiteDistance);
colorDistance = min(colorDistance, blackDistance);
if (colorDistance == blackDistance) {
finalColor = colorBlack;
} else if (colorDistance == whiteDistance) {
finalColor = colorWhite;
} else if (colorDistance == cyanDistance) {
finalColor = colorCyan;
} else {
finalColor = colorMagenta;
}
gl_FragColor = finalColor;
}
);
#else
NSString *const kGPUImageCGAColorspaceFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
vec2 sampleDivisor = vec2(1.0 / 200.0, 1.0 / 320.0);
//highp vec4 colorDivisor = vec4(colorDepth);
vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
vec4 color = texture2D(inputImageTexture, samplePos );
//gl_FragColor = texture2D(inputImageTexture, samplePos );
vec4 colorCyan = vec4(85.0 / 255.0, 1.0, 1.0, 1.0);
vec4 colorMagenta = vec4(1.0, 85.0 / 255.0, 1.0, 1.0);
vec4 colorWhite = vec4(1.0, 1.0, 1.0, 1.0);
vec4 colorBlack = vec4(0.0, 0.0, 0.0, 1.0);
vec4 endColor;
float blackDistance = distance(color, colorBlack);
float whiteDistance = distance(color, colorWhite);
float magentaDistance = distance(color, colorMagenta);
float cyanDistance = distance(color, colorCyan);
vec4 finalColor;
float colorDistance = min(magentaDistance, cyanDistance);
colorDistance = min(colorDistance, whiteDistance);
colorDistance = min(colorDistance, blackDistance);
if (colorDistance == blackDistance) {
finalColor = colorBlack;
} else if (colorDistance == whiteDistance) {
finalColor = colorWhite;
} else if (colorDistance == cyanDistance) {
finalColor = colorCyan;
} else {
finalColor = colorMagenta;
}
gl_FragColor = finalColor;
}
);
#endif
@implementation GPUImageCGAColorspaceFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageCGAColorspaceFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@@ -0,0 +1,62 @@
#import "GPUImageFilterGroup.h"
@class GPUImageGrayscaleFilter;
@class GPUImageSingleComponentGaussianBlurFilter;
@class GPUImageDirectionalSobelEdgeDetectionFilter;
@class GPUImageDirectionalNonMaximumSuppressionFilter;
@class GPUImageWeakPixelInclusionFilter;
/** This applies the edge detection process described by John Canny in
Canny, J., A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
and implemented in OpenGL ES by
A. Ensor, S. Hall. GPU-based Image Analysis on Mobile Devices. Proceedings of Image and Vision Computing New Zealand 2011.
It starts with a conversion to luminance, followed by an accelerated 9-hit Gaussian blur. A Sobel operator is applied to obtain the overall
gradient strength in the blurred image, as well as the direction (in texture sampling steps) of the gradient. A non-maximum suppression filter
acts along the direction of the gradient, highlighting strong edges that pass the threshold and completely removing those that fail the lower
threshold. Finally, pixels from in-between these thresholds are either included in edges or rejected based on neighboring pixels.
*/
@interface GPUImageCannyEdgeDetectionFilter : GPUImageFilterGroup
{
GPUImageGrayscaleFilter *luminanceFilter;
GPUImageSingleComponentGaussianBlurFilter *blurFilter;
GPUImageDirectionalSobelEdgeDetectionFilter *edgeDetectionFilter;
GPUImageDirectionalNonMaximumSuppressionFilter *nonMaximumSuppressionFilter;
GPUImageWeakPixelInclusionFilter *weakPixelInclusionFilter;
}
/** The image width and height factors tweak the appearance of the edges.
These parameters affect the visibility of the detected edges.
By default, they match the inverse of the filter size in pixels.
*/
@property(readwrite, nonatomic) CGFloat texelWidth;
/** The image width and height factors tweak the appearance of the edges.
These parameters affect the visibility of the detected edges.
By default, they match the inverse of the filter size in pixels.
*/
@property(readwrite, nonatomic) CGFloat texelHeight;
/** The underlying blur radius for the Gaussian blur. Default is 2.0.
*/
@property (readwrite, nonatomic) CGFloat blurRadiusInPixels;
/** The underlying blur texel spacing multiplier. Default is 1.0.
*/
@property (readwrite, nonatomic) CGFloat blurTexelSpacingMultiplier;
/** Any edge with a gradient magnitude above this threshold will pass and show up in the final result.
*/
@property(readwrite, nonatomic) CGFloat upperThreshold;
/** Any edge with a gradient magnitude below this threshold will fail and be removed from the final result.
*/
@property(readwrite, nonatomic) CGFloat lowerThreshold;
@end
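A sketch of typical settings (photo is an assumed UIImage; the values shown match the defaults set in -init in the implementation that follows):

GPUImageCannyEdgeDetectionFilter *cannyFilter = [[GPUImageCannyEdgeDetectionFilter alloc] init];
cannyFilter.blurRadiusInPixels = 2.0; // a wider blur suppresses more noise before edge detection
cannyFilter.upperThreshold = 0.4; // gradients above this always count as edges
cannyFilter.lowerThreshold = 0.1; // gradients below this are always discarded
UIImage *edgeImage = [cannyFilter imageByFilteringImage:photo];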

View File

@@ -0,0 +1,125 @@
#import "GPUImageCannyEdgeDetectionFilter.h"
#import "GPUImageGrayscaleFilter.h"
#import "GPUImageDirectionalSobelEdgeDetectionFilter.h"
#import "GPUImageDirectionalNonMaximumSuppressionFilter.h"
#import "GPUImageWeakPixelInclusionFilter.h"
#import "GPUImageSingleComponentGaussianBlurFilter.h"
@implementation GPUImageCannyEdgeDetectionFilter
@synthesize upperThreshold;
@synthesize lowerThreshold;
@synthesize blurRadiusInPixels;
@synthesize blurTexelSpacingMultiplier;
@synthesize texelWidth;
@synthesize texelHeight;
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
// First pass: convert image to luminance
luminanceFilter = [[GPUImageGrayscaleFilter alloc] init];
[self addFilter:luminanceFilter];
// Second pass: apply a variable Gaussian blur
blurFilter = [[GPUImageSingleComponentGaussianBlurFilter alloc] init];
[self addFilter:blurFilter];
// Third pass: run the Sobel edge detection, with calculated gradient directions, on this blurred image
edgeDetectionFilter = [[GPUImageDirectionalSobelEdgeDetectionFilter alloc] init];
[self addFilter:edgeDetectionFilter];
// Fourth pass: apply non-maximum suppression
nonMaximumSuppressionFilter = [[GPUImageDirectionalNonMaximumSuppressionFilter alloc] init];
[self addFilter:nonMaximumSuppressionFilter];
// Fifth pass: include weak pixels to complete edges
weakPixelInclusionFilter = [[GPUImageWeakPixelInclusionFilter alloc] init];
[self addFilter:weakPixelInclusionFilter];
[luminanceFilter addTarget:blurFilter];
[blurFilter addTarget:edgeDetectionFilter];
[edgeDetectionFilter addTarget:nonMaximumSuppressionFilter];
[nonMaximumSuppressionFilter addTarget:weakPixelInclusionFilter];
self.initialFilters = [NSArray arrayWithObject:luminanceFilter];
// self.terminalFilter = nonMaximumSuppressionFilter;
self.terminalFilter = weakPixelInclusionFilter;
self.blurRadiusInPixels = 2.0;
self.blurTexelSpacingMultiplier = 1.0;
self.upperThreshold = 0.4;
self.lowerThreshold = 0.1;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setBlurRadiusInPixels:(CGFloat)newValue;
{
blurFilter.blurRadiusInPixels = newValue;
}
- (CGFloat)blurRadiusInPixels;
{
return blurFilter.blurRadiusInPixels;
}
- (void)setBlurTexelSpacingMultiplier:(CGFloat)newValue;
{
blurFilter.texelSpacingMultiplier = newValue;
}
- (CGFloat)blurTexelSpacingMultiplier;
{
return blurFilter.texelSpacingMultiplier;
}
- (void)setTexelWidth:(CGFloat)newValue;
{
edgeDetectionFilter.texelWidth = newValue;
}
- (CGFloat)texelWidth;
{
return edgeDetectionFilter.texelWidth;
}
- (void)setTexelHeight:(CGFloat)newValue;
{
edgeDetectionFilter.texelHeight = newValue;
}
- (CGFloat)texelHeight;
{
return edgeDetectionFilter.texelHeight;
}
- (void)setUpperThreshold:(CGFloat)newValue;
{
nonMaximumSuppressionFilter.upperThreshold = newValue;
}
- (CGFloat)upperThreshold;
{
return nonMaximumSuppressionFilter.upperThreshold;
}
- (void)setLowerThreshold:(CGFloat)newValue;
{
nonMaximumSuppressionFilter.lowerThreshold = newValue;
}
- (CGFloat)lowerThreshold;
{
return nonMaximumSuppressionFilter.lowerThreshold;
}
@end

View File

@@ -0,0 +1,32 @@
#import "GPUImageTwoInputFilter.h"
/** Selectively replaces a color in the first image with the second image
*/
@interface GPUImageChromaKeyBlendFilter : GPUImageTwoInputFilter
{
GLint colorToReplaceUniform, thresholdSensitivityUniform, smoothingUniform;
}
/** The threshold sensitivity controls how close a pixel's color must be to the target color for it to be replaced.
The default value is 0.4, as set in -init.
*/
@property(readwrite, nonatomic) CGFloat thresholdSensitivity;
/** The degree of smoothing controls how gradually similar colors are replaced in the image.
The default value is 0.1.
*/
@property(readwrite, nonatomic) CGFloat smoothing;
/** The color to be replaced is specified using individual red, green, and blue components (normalized to 1.0).
The default is green: (0.0, 1.0, 0.0).
@param redComponent Red component of color to be replaced
@param greenComponent Green component of color to be replaced
@param blueComponent Blue component of color to be replaced
*/
- (void)setColorToReplaceRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
@end
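A green-screen compositing sketch (greenScreenImage and backgroundImage are assumed UIImages; with GPUImage two-input filters, the first target connection supplies inputImageTexture and the second supplies inputImageTexture2):

GPUImageChromaKeyBlendFilter *chromaKeyBlendFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[chromaKeyBlendFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0]; // key out green
chromaKeyBlendFilter.thresholdSensitivity = 0.4;
chromaKeyBlendFilter.smoothing = 0.1;
GPUImagePicture *foreground = [[GPUImagePicture alloc] initWithImage:greenScreenImage];
GPUImagePicture *background = [[GPUImagePicture alloc] initWithImage:backgroundImage];
[foreground addTarget:chromaKeyBlendFilter]; // first input: image containing the key color
[background addTarget:chromaKeyBlendFilter]; // second input: replacement imagery
[background processImage];
[foreground processImage];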

View File

@@ -0,0 +1,117 @@
#import "GPUImageChromaKeyBlendFilter.h"
// Shader code based on Apple's CIChromaKeyFilter example: https://developer.apple.com/library/mac/#samplecode/CIChromaKeyFilter/Introduction/Intro.html
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageChromaKeyBlendFragmentShaderString = SHADER_STRING
(
precision highp float;
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform float thresholdSensitivity;
uniform float smoothing;
uniform vec3 colorToReplace;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
// float blendValue = 1.0 - smoothstep(thresholdSensitivity - smoothing, thresholdSensitivity , abs(Cr - maskCr) + abs(Cb - maskCb));
float blendValue = 1.0 - smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
gl_FragColor = mix(textureColor, textureColor2, blendValue);
}
);
#else
NSString *const kGPUImageChromaKeyBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform float thresholdSensitivity;
uniform float smoothing;
uniform vec3 colorToReplace;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
// float blendValue = 1.0 - smoothstep(thresholdSensitivity - smoothing, thresholdSensitivity , abs(Cr - maskCr) + abs(Cb - maskCb));
float blendValue = 1.0 - smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
gl_FragColor = mix(textureColor, textureColor2, blendValue);
}
);
#endif
@implementation GPUImageChromaKeyBlendFilter
@synthesize thresholdSensitivity = _thresholdSensitivity;
@synthesize smoothing = _smoothing;
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageChromaKeyBlendFragmentShaderString]))
{
return nil;
}
thresholdSensitivityUniform = [filterProgram uniformIndex:@"thresholdSensitivity"];
smoothingUniform = [filterProgram uniformIndex:@"smoothing"];
colorToReplaceUniform = [filterProgram uniformIndex:@"colorToReplace"];
self.thresholdSensitivity = 0.4;
self.smoothing = 0.1;
[self setColorToReplaceRed:0.0 green:1.0 blue:0.0];
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setColorToReplaceRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
{
GPUVector3 colorToReplace = {redComponent, greenComponent, blueComponent};
[self setVec3:colorToReplace forUniform:colorToReplaceUniform program:filterProgram];
}
- (void)setThresholdSensitivity:(CGFloat)newValue;
{
_thresholdSensitivity = newValue;
[self setFloat:(GLfloat)_thresholdSensitivity forUniform:thresholdSensitivityUniform program:filterProgram];
}
- (void)setSmoothing:(CGFloat)newValue;
{
_smoothing = newValue;
[self setFloat:(GLfloat)_smoothing forUniform:smoothingUniform program:filterProgram];
}
@end

View File

@@ -0,0 +1,30 @@
#import "GPUImageFilter.h"
@interface GPUImageChromaKeyFilter : GPUImageFilter
{
GLint colorToReplaceUniform, thresholdSensitivityUniform, smoothingUniform;
}
/** The threshold sensitivity controls how close a pixel's color must be to the target color for it to be replaced.
The default value is 0.4, as set in -init.
*/
@property(readwrite, nonatomic) CGFloat thresholdSensitivity;
/** The degree of smoothing controls how gradually similar colors are replaced in the image.
The default value is 0.1.
*/
@property(readwrite, nonatomic) CGFloat smoothing;
/** The color to be replaced is specified using individual red, green, and blue components (normalized to 1.0).
The default is green: (0.0, 1.0, 0.0).
@param redComponent Red component of color to be replaced
@param greenComponent Green component of color to be replaced
@param blueComponent Blue component of color to be replaced
*/
- (void)setColorToReplaceRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
@end

View File

@@ -0,0 +1,113 @@
#import "GPUImageChromaKeyFilter.h"
// Shader code based on Apple's CIChromaKeyFilter example: https://developer.apple.com/library/mac/#samplecode/CIChromaKeyFilter/Introduction/Intro.html
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageChromaKeyFragmentShaderString = SHADER_STRING
(
precision highp float;
varying highp vec2 textureCoordinate;
uniform float thresholdSensitivity;
uniform float smoothing;
uniform vec3 colorToReplace;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
// float blendValue = 1.0 - smoothstep(thresholdSensitivity - smoothing, thresholdSensitivity , abs(Cr - maskCr) + abs(Cb - maskCb));
float blendValue = smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
gl_FragColor = vec4(textureColor.rgb, textureColor.a * blendValue);
}
);
#else
NSString *const kGPUImageChromaKeyFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform float thresholdSensitivity;
uniform float smoothing;
uniform vec3 colorToReplace;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
// float blendValue = 1.0 - smoothstep(thresholdSensitivity - smoothing, thresholdSensitivity , abs(Cr - maskCr) + abs(Cb - maskCb));
float blendValue = smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
gl_FragColor = vec4(textureColor.rgb, textureColor.a * blendValue);
}
);
#endif
@implementation GPUImageChromaKeyFilter
@synthesize thresholdSensitivity = _thresholdSensitivity;
@synthesize smoothing = _smoothing;
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageChromaKeyFragmentShaderString]))
{
return nil;
}
thresholdSensitivityUniform = [filterProgram uniformIndex:@"thresholdSensitivity"];
smoothingUniform = [filterProgram uniformIndex:@"smoothing"];
colorToReplaceUniform = [filterProgram uniformIndex:@"colorToReplace"];
self.thresholdSensitivity = 0.4;
self.smoothing = 0.1;
[self setColorToReplaceRed:0.0 green:1.0 blue:0.0];
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setColorToReplaceRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
{
GPUVector3 colorToReplace = {redComponent, greenComponent, blueComponent};
[self setVec3:colorToReplace forUniform:colorToReplaceUniform program:filterProgram];
}
- (void)setThresholdSensitivity:(CGFloat)newValue;
{
_thresholdSensitivity = newValue;
[self setFloat:(GLfloat)_thresholdSensitivity forUniform:thresholdSensitivityUniform program:filterProgram];
}
- (void)setSmoothing:(CGFloat)newValue;
{
_smoothing = newValue;
[self setFloat:(GLfloat)_smoothing forUniform:smoothingUniform program:filterProgram];
}
@end

View File

@@ -0,0 +1,19 @@
#import "GPUImageFilterGroup.h"
@class GPUImageErosionFilter;
@class GPUImageDilationFilter;
// A filter that first performs a dilation on the red channel of an image, followed by an erosion of the same radius.
// This helps to filter out smaller dark elements.
@interface GPUImageClosingFilter : GPUImageFilterGroup
{
GPUImageErosionFilter *erosionFilter;
GPUImageDilationFilter *dilationFilter;
}
@property(readwrite, nonatomic) CGFloat verticalTexelSpacing, horizontalTexelSpacing;
- (id)initWithRadius:(NSUInteger)radius;
@end
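A sketch; morphological closing (dilation followed by erosion at the same radius) fills in dark specks smaller than the structuring radius (maskImage is an assumed UIImage):

GPUImageClosingFilter *closingFilter = [[GPUImageClosingFilter alloc] initWithRadius:2];
// Note: like the underlying dilation/erosion filters, this operates on the red channel.
UIImage *cleanedMask = [closingFilter imageByFilteringImage:maskImage];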

View File

@@ -0,0 +1,57 @@
#import "GPUImageClosingFilter.h"
#import "GPUImageErosionFilter.h"
#import "GPUImageDilationFilter.h"
@implementation GPUImageClosingFilter
@synthesize verticalTexelSpacing = _verticalTexelSpacing;
@synthesize horizontalTexelSpacing = _horizontalTexelSpacing;
- (id)init;
{
if (!(self = [self initWithRadius:1]))
{
return nil;
}
return self;
}
- (id)initWithRadius:(NSUInteger)radius;
{
if (!(self = [super init]))
{
return nil;
}
// First pass: dilation
dilationFilter = [[GPUImageDilationFilter alloc] initWithRadius:radius];
[self addFilter:dilationFilter];
// Second pass: erosion
erosionFilter = [[GPUImageErosionFilter alloc] initWithRadius:radius];
[self addFilter:erosionFilter];
[dilationFilter addTarget:erosionFilter];
self.initialFilters = [NSArray arrayWithObjects:dilationFilter, nil];
self.terminalFilter = erosionFilter;
return self;
}
- (void)setVerticalTexelSpacing:(CGFloat)newValue;
{
_verticalTexelSpacing = newValue;
erosionFilter.verticalTexelSpacing = newValue;
dilationFilter.verticalTexelSpacing = newValue;
}
- (void)setHorizontalTexelSpacing:(CGFloat)newValue;
{
_horizontalTexelSpacing = newValue;
erosionFilter.horizontalTexelSpacing = newValue;
dilationFilter.horizontalTexelSpacing = newValue;
}
@end

View File

@@ -0,0 +1,5 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageColorBlendFilter : GPUImageTwoInputFilter
@end

View File

@@ -0,0 +1,113 @@
#import "GPUImageColorBlendFilter.h"
/**
* Color blend mode based upon pseudo code from the PDF specification.
*/
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
highp float lum(lowp vec3 c) {
return dot(c, vec3(0.3, 0.59, 0.11));
}
lowp vec3 clipcolor(lowp vec3 c) {
highp float l = lum(c);
lowp float n = min(min(c.r, c.g), c.b);
lowp float x = max(max(c.r, c.g), c.b);
if (n < 0.0) {
c.r = l + ((c.r - l) * l) / (l - n);
c.g = l + ((c.g - l) * l) / (l - n);
c.b = l + ((c.b - l) * l) / (l - n);
}
if (x > 1.0) {
c.r = l + ((c.r - l) * (1.0 - l)) / (x - l);
c.g = l + ((c.g - l) * (1.0 - l)) / (x - l);
c.b = l + ((c.b - l) * (1.0 - l)) / (x - l);
}
return c;
}
lowp vec3 setlum(lowp vec3 c, highp float l) {
highp float d = l - lum(c);
c = c + vec3(d);
return clipcolor(c);
}
void main()
{
highp vec4 baseColor = texture2D(inputImageTexture, textureCoordinate);
highp vec4 overlayColor = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(baseColor.rgb * (1.0 - overlayColor.a) + setlum(overlayColor.rgb, lum(baseColor.rgb)) * overlayColor.a, baseColor.a);
}
);
#else
NSString *const kGPUImageColorBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
float lum(vec3 c) {
return dot(c, vec3(0.3, 0.59, 0.11));
}
vec3 clipcolor(vec3 c) {
float l = lum(c);
float n = min(min(c.r, c.g), c.b);
float x = max(max(c.r, c.g), c.b);
if (n < 0.0) {
c.r = l + ((c.r - l) * l) / (l - n);
c.g = l + ((c.g - l) * l) / (l - n);
c.b = l + ((c.b - l) * l) / (l - n);
}
if (x > 1.0) {
c.r = l + ((c.r - l) * (1.0 - l)) / (x - l);
c.g = l + ((c.g - l) * (1.0 - l)) / (x - l);
c.b = l + ((c.b - l) * (1.0 - l)) / (x - l);
}
return c;
}
vec3 setlum(vec3 c, float l) {
float d = l - lum(c);
c = c + vec3(d);
return clipcolor(c);
}
void main()
{
vec4 baseColor = texture2D(inputImageTexture, textureCoordinate);
vec4 overlayColor = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(baseColor.rgb * (1.0 - overlayColor.a) + setlum(overlayColor.rgb, lum(baseColor.rgb)) * overlayColor.a, baseColor.a);
}
);
#endif
@implementation GPUImageColorBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageColorBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@@ -0,0 +1,9 @@
#import "GPUImageTwoInputFilter.h"
/** Applies a color burn blend of two images
*/
@interface GPUImageColorBurnBlendFilter : GPUImageTwoInputFilter
{
}
@end

View File

@@ -0,0 +1,52 @@
#import "GPUImageColorBurnBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorBurnBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
mediump vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
mediump vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
mediump vec4 whiteColor = vec4(1.0);
gl_FragColor = whiteColor - (whiteColor - textureColor) / textureColor2;
}
);
#else
NSString *const kGPUImageColorBurnBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
vec4 whiteColor = vec4(1.0);
gl_FragColor = whiteColor - (whiteColor - textureColor) / textureColor2;
}
);
#endif
@implementation GPUImageColorBurnBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageColorBurnBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@@ -0,0 +1,9 @@
#import "GPUImageTwoInputFilter.h"
/** Applies a color dodge blend of two images
*/
@interface GPUImageColorDodgeBlendFilter : GPUImageTwoInputFilter
{
}
@end

View File

@@ -0,0 +1,75 @@
#import "GPUImageColorDodgeBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorDodgeBlendFragmentShaderString = SHADER_STRING
(
precision mediump float;
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
vec3 baseOverlayAlphaProduct = vec3(overlay.a * base.a);
vec3 rightHandProduct = overlay.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlay.a);
vec3 firstBlendColor = baseOverlayAlphaProduct + rightHandProduct;
vec3 overlayRGB = clamp((overlay.rgb / clamp(overlay.a, 0.01, 1.0)) * step(0.0, overlay.a), 0.0, 0.99);
vec3 secondBlendColor = (base.rgb * overlay.a) / (1.0 - overlayRGB) + rightHandProduct;
vec3 colorChoice = step((overlay.rgb * base.a + base.rgb * overlay.a), baseOverlayAlphaProduct);
gl_FragColor = vec4(mix(firstBlendColor, secondBlendColor, colorChoice), 1.0);
}
);
#else
NSString *const kGPUImageColorDodgeBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
vec3 baseOverlayAlphaProduct = vec3(overlay.a * base.a);
vec3 rightHandProduct = overlay.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlay.a);
vec3 firstBlendColor = baseOverlayAlphaProduct + rightHandProduct;
vec3 overlayRGB = clamp((overlay.rgb / clamp(overlay.a, 0.01, 1.0)) * step(0.0, overlay.a), 0.0, 0.99);
vec3 secondBlendColor = (base.rgb * overlay.a) / (1.0 - overlayRGB) + rightHandProduct;
vec3 colorChoice = step((overlay.rgb * base.a + base.rgb * overlay.a), baseOverlayAlphaProduct);
gl_FragColor = vec4(mix(firstBlendColor, secondBlendColor, colorChoice), 1.0);
}
);
#endif
@implementation GPUImageColorDodgeBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageColorDodgeBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@@ -0,0 +1,7 @@
#import "GPUImageFilter.h"
@interface GPUImageColorInvertFilter : GPUImageFilter
{
}
@end

View File

@@ -0,0 +1,46 @@
#import "GPUImageColorInvertFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageInvertFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
}
);
#else
NSString *const kGPUImageInvertFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4((1.0 - textureColor.rgb), textureColor.w);
}
);
#endif
@implementation GPUImageColorInvertFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageInvertFragmentShaderString]))
{
return nil;
}
return self;
}
@end

View File

@@ -0,0 +1,19 @@
#import "GPUImageFilter.h"
/** Transforms the colors of an image by applying a matrix to them
*/
@interface GPUImageColorMatrixFilter : GPUImageFilter
{
GLint colorMatrixUniform;
GLint intensityUniform;
}
/** A 4x4 matrix used to transform each color in an image
*/
@property(readwrite, nonatomic) GPUMatrix4x4 colorMatrix;
/** The degree to which the new transformed color replaces the original color for each pixel
*/
@property(readwrite, nonatomic) CGFloat intensity;
@end
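A worked example: because the shader computes textureColor * colorMatrix, each row of the GPUMatrix4x4 below selects what the corresponding output channel reads from. This matrix swaps the red and blue channels (a channel-swap matrix is symmetric, so row/column conventions do not change the result; the values are illustrative):

GPUImageColorMatrixFilter *colorMatrixFilter = [[GPUImageColorMatrixFilter alloc] init];
colorMatrixFilter.colorMatrix = (GPUMatrix4x4){
{0.f, 0.f, 1.f, 0.f}, // output red reads input blue
{0.f, 1.f, 0.f, 0.f}, // green passes through
{1.f, 0.f, 0.f, 0.f}, // output blue reads input red
{0.f, 0.f, 0.f, 1.f}  // alpha passes through
};
colorMatrixFilter.intensity = 1.0; // 1.0 fully replaces the original color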

View File

@@ -0,0 +1,87 @@
#import "GPUImageColorMatrixFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorMatrixFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp mat4 colorMatrix;
uniform lowp float intensity;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 outputColor = textureColor * colorMatrix;
gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
);
#else
NSString *const kGPUImageColorMatrixFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform mat4 colorMatrix;
uniform float intensity;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 outputColor = textureColor * colorMatrix;
gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
);
#endif
@implementation GPUImageColorMatrixFilter
@synthesize intensity = _intensity;
@synthesize colorMatrix = _colorMatrix;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageColorMatrixFragmentShaderString]))
{
return nil;
}
colorMatrixUniform = [filterProgram uniformIndex:@"colorMatrix"];
intensityUniform = [filterProgram uniformIndex:@"intensity"];
self.intensity = 1.f;
self.colorMatrix = (GPUMatrix4x4){
{1.f, 0.f, 0.f, 0.f},
{0.f, 1.f, 0.f, 0.f},
{0.f, 0.f, 1.f, 0.f},
{0.f, 0.f, 0.f, 1.f}
};
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setIntensity:(CGFloat)newIntensity;
{
_intensity = newIntensity;
[self setFloat:_intensity forUniform:intensityUniform program:filterProgram];
}
- (void)setColorMatrix:(GPUMatrix4x4)newColorMatrix;
{
_colorMatrix = newColorMatrix;
[self setMatrix4f:_colorMatrix forUniform:colorMatrixUniform program:filterProgram];
}
@end

View File

@@ -0,0 +1,10 @@
#import "GPUImageFilter.h"
@interface GPUImageColorPackingFilter : GPUImageFilter
{
GLint texelWidthUniform, texelHeightUniform;
CGFloat texelWidth, texelHeight;
}
@end

View File

@@ -0,0 +1,139 @@
#import "GPUImageColorPackingFilter.h"
NSString *const kGPUImageColorPackingVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float texelWidth;
uniform float texelHeight;
varying vec2 upperLeftInputTextureCoordinate;
varying vec2 upperRightInputTextureCoordinate;
varying vec2 lowerLeftInputTextureCoordinate;
varying vec2 lowerRightInputTextureCoordinate;
void main()
{
gl_Position = position;
upperLeftInputTextureCoordinate = inputTextureCoordinate.xy + vec2(-texelWidth, -texelHeight);
upperRightInputTextureCoordinate = inputTextureCoordinate.xy + vec2(texelWidth, -texelHeight);
lowerLeftInputTextureCoordinate = inputTextureCoordinate.xy + vec2(-texelWidth, texelHeight);
lowerRightInputTextureCoordinate = inputTextureCoordinate.xy + vec2(texelWidth, texelHeight);
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageColorPackingFragmentShaderString = SHADER_STRING
(
precision lowp float;
uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;
varying highp vec2 outputTextureCoordinate;
varying highp vec2 upperLeftInputTextureCoordinate;
varying highp vec2 upperRightInputTextureCoordinate;
varying highp vec2 lowerLeftInputTextureCoordinate;
varying highp vec2 lowerRightInputTextureCoordinate;
void main()
{
float upperLeftIntensity = texture2D(inputImageTexture, upperLeftInputTextureCoordinate).r;
float upperRightIntensity = texture2D(inputImageTexture, upperRightInputTextureCoordinate).r;
float lowerLeftIntensity = texture2D(inputImageTexture, lowerLeftInputTextureCoordinate).r;
float lowerRightIntensity = texture2D(inputImageTexture, lowerRightInputTextureCoordinate).r;
gl_FragColor = vec4(upperLeftIntensity, upperRightIntensity, lowerLeftIntensity, lowerRightIntensity);
}
);
#else
NSString *const kGPUImageColorPackingFragmentShaderString = SHADER_STRING
(
uniform sampler2D inputImageTexture;
uniform mat3 convolutionMatrix;
varying vec2 outputTextureCoordinate;
varying vec2 upperLeftInputTextureCoordinate;
varying vec2 upperRightInputTextureCoordinate;
varying vec2 lowerLeftInputTextureCoordinate;
varying vec2 lowerRightInputTextureCoordinate;
void main()
{
float upperLeftIntensity = texture2D(inputImageTexture, upperLeftInputTextureCoordinate).r;
float upperRightIntensity = texture2D(inputImageTexture, upperRightInputTextureCoordinate).r;
float lowerLeftIntensity = texture2D(inputImageTexture, lowerLeftInputTextureCoordinate).r;
float lowerRightIntensity = texture2D(inputImageTexture, lowerRightInputTextureCoordinate).r;
gl_FragColor = vec4(upperLeftIntensity, upperRightIntensity, lowerLeftIntensity, lowerRightIntensity);
}
);
#endif
@implementation GPUImageColorPackingFilter
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithVertexShaderFromString:kGPUImageColorPackingVertexShaderString fragmentShaderFromString:kGPUImageColorPackingFragmentShaderString]))
{
return nil;
}
texelWidthUniform = [filterProgram uniformIndex:@"texelWidth"];
texelHeightUniform = [filterProgram uniformIndex:@"texelHeight"];
return self;
}
- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
texelWidth = 0.5 / inputTextureSize.width;
texelHeight = 0.5 / inputTextureSize.height;
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:filterProgram];
glUniform1f(texelWidthUniform, texelWidth);
glUniform1f(texelHeightUniform, texelHeight);
});
}
#pragma mark -
#pragma mark Managing the display FBOs
- (CGSize)sizeOfFBO;
{
CGSize outputSize = [self maximumOutputSize];
if ( (CGSizeEqualToSize(outputSize, CGSizeZero)) || (inputTextureSize.width < outputSize.width) )
{
CGSize quarterSize;
quarterSize.width = inputTextureSize.width / 2.0;
quarterSize.height = inputTextureSize.height / 2.0;
return quarterSize;
}
else
{
return outputSize;
}
}
#pragma mark -
#pragma mark Rendering
- (CGSize)outputFrameSize;
{
CGSize quarterSize;
quarterSize.width = inputTextureSize.width / 2.0;
quarterSize.height = inputTextureSize.height / 2.0;
return quarterSize;
}
@end

View File

@@ -0,0 +1,14 @@
#import "GPUImageFilter.h"
/** Adjusts the contrast of the image
*/
@interface GPUImageContrastFilter : GPUImageFilter
{
GLint contrastUniform;
}
/** Contrast ranges from 0.0 to 4.0 (max contrast), with 1.0 as the normal level
*/
@property(readwrite, nonatomic) CGFloat contrast;
@end
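A sketch; the shader in the implementation that follows scales each channel's distance from the 0.5 midpoint, so middle gray stays fixed while values above and below it spread apart:

GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init];
contrastFilter.contrast = 1.5; // > 1.0 increases contrast; e.g. 0.8 becomes (0.8 - 0.5) * 1.5 + 0.5 = 0.95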

View File

@@ -0,0 +1,66 @@
#import "GPUImageContrastFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageContrastFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float contrast;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4(((textureColor.rgb - vec3(0.5)) * contrast + vec3(0.5)), textureColor.w);
}
);
#else
NSString *const kGPUImageContrastFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float contrast;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4(((textureColor.rgb - vec3(0.5)) * contrast + vec3(0.5)), textureColor.w);
}
);
#endif
@implementation GPUImageContrastFilter
@synthesize contrast = _contrast;
#pragma mark -
#pragma mark Initialization
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageContrastFragmentShaderString]))
{
return nil;
}
contrastUniform = [filterProgram uniformIndex:@"contrast"];
self.contrast = 1.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setContrast:(CGFloat)newValue;
{
_contrast = newValue;
[self setFloat:_contrast forUniform:contrastUniform program:filterProgram];
}
@end

View File

@@ -0,0 +1,14 @@
#import "GPUImageFilter.h"
@interface GPUImageCropFilter : GPUImageFilter
{
GLfloat cropTextureCoordinates[8];
}
// The crop region is the rectangle within the image to crop. It is normalized to a coordinate space from 0.0 to 1.0, with 0.0, 0.0 being the upper left corner of the image
@property(readwrite, nonatomic) CGRect cropRegion;
// Initialization and teardown
- (id)initWithCropRegion:(CGRect)newCropRegion;
@end
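For example, to keep the centered region at half the width and half the height of the input, a sketch:

GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.25, 0.25, 0.5, 0.5)];
// origin (0.25, 0.25) with size (0.5, 0.5), all normalized to the 0.0-1.0 coordinate space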

View File

@@ -0,0 +1,274 @@
#import "GPUImageCropFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageCropFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#else
NSString *const kGPUImageCropFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#endif
@interface GPUImageCropFilter ()
{
CGSize originallySuppliedInputSize;
}
- (void)calculateCropTextureCoordinates;
@end
@implementation GPUImageCropFilter
@synthesize cropRegion = _cropRegion;
#pragma mark -
#pragma mark Initialization and teardown
- (id)initWithCropRegion:(CGRect)newCropRegion;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageCropFragmentShaderString]))
{
return nil;
}
self.cropRegion = newCropRegion;
return self;
}
- (id)init;
{
if (!(self = [self initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 1.0)]))
{
return nil;
}
return self;
}
#pragma mark -
#pragma mark Rendering
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
if (self.preventRendering)
{
return;
}
// if (overrideInputSize)
// {
// if (CGSizeEqualToSize(forcedMaximumSize, CGSizeZero))
// {
// return;
// }
// else
// {
// CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(newSize, CGRectMake(0.0, 0.0, forcedMaximumSize.width, forcedMaximumSize.height));
// inputTextureSize = insetRect.size;
// return;
// }
// }
CGSize rotatedSize = [self rotatedSize:newSize forIndex:textureIndex];
originallySuppliedInputSize = rotatedSize;
CGSize scaledSize;
scaledSize.width = rotatedSize.width * _cropRegion.size.width;
scaledSize.height = rotatedSize.height * _cropRegion.size.height;
if (CGSizeEqualToSize(scaledSize, CGSizeZero))
{
inputTextureSize = scaledSize;
}
else if (!CGSizeEqualToSize(inputTextureSize, scaledSize))
{
inputTextureSize = scaledSize;
}
}
#pragma mark -
#pragma mark GPUImageInput
- (void)calculateCropTextureCoordinates;
{
CGFloat minX = _cropRegion.origin.x;
CGFloat minY = _cropRegion.origin.y;
CGFloat maxX = CGRectGetMaxX(_cropRegion);
CGFloat maxY = CGRectGetMaxY(_cropRegion);
switch(inputRotation)
{
case kGPUImageNoRotation: // Works
{
cropTextureCoordinates[0] = minX; // 0,0
cropTextureCoordinates[1] = minY;
cropTextureCoordinates[2] = maxX; // 1,0
cropTextureCoordinates[3] = minY;
cropTextureCoordinates[4] = minX; // 0,1
cropTextureCoordinates[5] = maxY;
cropTextureCoordinates[6] = maxX; // 1,1
cropTextureCoordinates[7] = maxY;
}; break;
case kGPUImageRotateLeft: // Fixed
{
cropTextureCoordinates[0] = maxY; // 1,0
cropTextureCoordinates[1] = 1.0 - maxX;
cropTextureCoordinates[2] = maxY; // 1,1
cropTextureCoordinates[3] = 1.0 - minX;
cropTextureCoordinates[4] = minY; // 0,0
cropTextureCoordinates[5] = 1.0 - maxX;
cropTextureCoordinates[6] = minY; // 0,1
cropTextureCoordinates[7] = 1.0 - minX;
}; break;
case kGPUImageRotateRight: // Fixed
{
cropTextureCoordinates[0] = minY; // 0,1
cropTextureCoordinates[1] = 1.0 - minX;
cropTextureCoordinates[2] = minY; // 0,0
cropTextureCoordinates[3] = 1.0 - maxX;
cropTextureCoordinates[4] = maxY; // 1,1
cropTextureCoordinates[5] = 1.0 - minX;
cropTextureCoordinates[6] = maxY; // 1,0
cropTextureCoordinates[7] = 1.0 - maxX;
}; break;
case kGPUImageFlipVertical: // Works for me
{
cropTextureCoordinates[0] = minX; // 0,1
cropTextureCoordinates[1] = maxY;
cropTextureCoordinates[2] = maxX; // 1,1
cropTextureCoordinates[3] = maxY;
cropTextureCoordinates[4] = minX; // 0,0
cropTextureCoordinates[5] = minY;
cropTextureCoordinates[6] = maxX; // 1,0
cropTextureCoordinates[7] = minY;
}; break;
case kGPUImageFlipHorizonal: // Works for me
{
cropTextureCoordinates[0] = maxX; // 1,0
cropTextureCoordinates[1] = minY;
cropTextureCoordinates[2] = minX; // 0,0
cropTextureCoordinates[3] = minY;
cropTextureCoordinates[4] = maxX; // 1,1
cropTextureCoordinates[5] = maxY;
cropTextureCoordinates[6] = minX; // 0,1
cropTextureCoordinates[7] = maxY;
}; break;
case kGPUImageRotate180: // Fixed
{
cropTextureCoordinates[0] = maxX; // 1,1
cropTextureCoordinates[1] = maxY;
cropTextureCoordinates[2] = minX; // 0,1
cropTextureCoordinates[3] = maxY;
cropTextureCoordinates[4] = maxX; // 1,0
cropTextureCoordinates[5] = minY;
cropTextureCoordinates[6] = minX; // 0,0
cropTextureCoordinates[7] = minY;
}; break;
case kGPUImageRotateRightFlipVertical: // Fixed
{
cropTextureCoordinates[0] = minY; // 0,0
cropTextureCoordinates[1] = 1.0 - maxX;
cropTextureCoordinates[2] = minY; // 0,1
cropTextureCoordinates[3] = 1.0 - minX;
cropTextureCoordinates[4] = maxY; // 1,0
cropTextureCoordinates[5] = 1.0 - maxX;
cropTextureCoordinates[6] = maxY; // 1,1
cropTextureCoordinates[7] = 1.0 - minX;
}; break;
case kGPUImageRotateRightFlipHorizontal: // Fixed
{
cropTextureCoordinates[0] = maxY; // 1,1
cropTextureCoordinates[1] = 1.0 - minX;
cropTextureCoordinates[2] = maxY; // 1,0
cropTextureCoordinates[3] = 1.0 - maxX;
cropTextureCoordinates[4] = minY; // 0,1
cropTextureCoordinates[5] = 1.0 - minX;
cropTextureCoordinates[6] = minY; // 0,0
cropTextureCoordinates[7] = 1.0 - maxX;
}; break;
}
}
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
static const GLfloat cropSquareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
[self renderToTextureWithVertices:cropSquareVertices textureCoordinates:cropTextureCoordinates];
[self informTargetsAboutNewFrameAtTime:frameTime];
}
#pragma mark -
#pragma mark Accessors
- (void)setCropRegion:(CGRect)newValue;
{
NSParameterAssert(newValue.origin.x >= 0 && newValue.origin.x <= 1 &&
newValue.origin.y >= 0 && newValue.origin.y <= 1 &&
newValue.size.width >= 0 && newValue.size.width <= 1 &&
newValue.size.height >= 0 && newValue.size.height <= 1);
_cropRegion = newValue;
[self calculateCropTextureCoordinates];
}
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
[super setInputRotation:newInputRotation atIndex:textureIndex];
[self calculateCropTextureCoordinates];
}
@end
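For reference, a minimal still-image sketch of the crop filter (hedged: it assumes the standard GPUImagePicture capture pipeline, and inputImage is a placeholder UIImage). The crop region is normalized to 0.0-1.0, so this keeps the centered middle 50% of the frame:

#import "GPUImageCropFilter.h"
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
// Normalized region: keep the centered 50% of the width and height.
GPUImageCropFilter *crop = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.25, 0.25, 0.5, 0.5)];
[source addTarget:crop];
[crop useNextFrameForImageCapture];
[source processImage];
UIImage *croppedImage = [crop imageFromCurrentFramebuffer];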

View File

@ -0,0 +1,17 @@
#import "GPUImageFilter.h"
@interface GPUImageCrosshairGenerator : GPUImageFilter
{
GLint crosshairWidthUniform, crosshairColorUniform;
}
// The width of the displayed crosshairs, in pixels. Currently this only works well for odd widths. The default is 5.
@property(readwrite, nonatomic) CGFloat crosshairWidth;
// The color of the crosshairs is specified using individual red, green, and blue components (normalized to 1.0). The default is green: (0.0, 1.0, 0.0).
- (void)setCrosshairColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
// Rendering
- (void)renderCrosshairsFromArray:(GLfloat *)crosshairCoordinates count:(NSUInteger)numberOfCrosshairs frameTime:(CMTime)frameTime;
@end

View File

@ -0,0 +1,139 @@
#import "GPUImageCrosshairGenerator.h"
NSString *const kGPUImageCrosshairVertexShaderString = SHADER_STRING
(
attribute vec4 position;
uniform float crosshairWidth;
varying vec2 centerLocation;
varying float pointSpacing;
void main()
{
gl_Position = vec4(((position.xy * 2.0) - 1.0), 0.0, 1.0);
gl_PointSize = crosshairWidth + 1.0;
pointSpacing = 1.0 / crosshairWidth;
centerLocation = vec2(pointSpacing * ceil(crosshairWidth / 2.0), pointSpacing * ceil(crosshairWidth / 2.0));
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageCrosshairFragmentShaderString = SHADER_STRING
(
uniform lowp vec3 crosshairColor;
varying highp vec2 centerLocation;
varying highp float pointSpacing;
void main()
{
lowp vec2 distanceFromCenter = abs(centerLocation - gl_PointCoord.xy);
lowp float axisTest = step(pointSpacing, gl_PointCoord.y) * step(distanceFromCenter.x, 0.09) + step(pointSpacing, gl_PointCoord.x) * step(distanceFromCenter.y, 0.09);
gl_FragColor = vec4(crosshairColor * axisTest, axisTest);
// gl_FragColor = vec4(distanceFromCenterInX, distanceFromCenterInY, 0.0, 1.0);
}
);
#else
NSString *const kGPUImageCrosshairFragmentShaderString = SHADER_STRING
(
GPUImageEscapedHashIdentifier(version 120)\n
uniform vec3 crosshairColor;
varying vec2 centerLocation;
varying float pointSpacing;
void main()
{
vec2 distanceFromCenter = abs(centerLocation - gl_PointCoord.xy);
float axisTest = step(pointSpacing, gl_PointCoord.y) * step(distanceFromCenter.x, 0.09) + step(pointSpacing, gl_PointCoord.x) * step(distanceFromCenter.y, 0.09);
gl_FragColor = vec4(crosshairColor * axisTest, axisTest);
// gl_FragColor = vec4(distanceFromCenterInX, distanceFromCenterInY, 0.0, 1.0);
}
);
#endif
@implementation GPUImageCrosshairGenerator
@synthesize crosshairWidth = _crosshairWidth;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithVertexShaderFromString:kGPUImageCrosshairVertexShaderString fragmentShaderFromString:kGPUImageCrosshairFragmentShaderString]))
{
return nil;
}
runSynchronouslyOnVideoProcessingQueue(^{
crosshairWidthUniform = [filterProgram uniformIndex:@"crosshairWidth"];
crosshairColorUniform = [filterProgram uniformIndex:@"crosshairColor"];
self.crosshairWidth = 5.0;
[self setCrosshairColorRed:0.0 green:1.0 blue:0.0];
});
return self;
}
#pragma mark -
#pragma mark Rendering
- (void)renderCrosshairsFromArray:(GLfloat *)crosshairCoordinates count:(NSUInteger)numberOfCrosshairs frameTime:(CMTime)frameTime;
{
if (self.preventRendering)
{
return;
}
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:filterProgram];
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
#else
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
#endif
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, crosshairCoordinates);
glDrawArrays(GL_POINTS, 0, (GLsizei)numberOfCrosshairs);
[self informTargetsAboutNewFrameAtTime:frameTime];
});
}
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
// Prevent rendering of the frame by normal means
}
#pragma mark -
#pragma mark Accessors
- (void)setCrosshairWidth:(CGFloat)newValue;
{
_crosshairWidth = newValue;
[self setFloat:_crosshairWidth forUniform:crosshairWidthUniform program:filterProgram];
}
- (void)setCrosshairColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
{
GPUVector3 crosshairColor = {redComponent, greenComponent, blueComponent};
[self setVec3:crosshairColor forUniform:crosshairColorUniform program:filterProgram];
}
@end
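A hedged usage sketch: the generator is normally driven from a feature detector's callback and composited over the video with a blend filter elsewhere; cornerDetector stands in for a GPUImageHarrisCornerDetectionFilter instance:

GPUImageCrosshairGenerator *crosshairs = [[GPUImageCrosshairGenerator alloc] init];
crosshairs.crosshairWidth = 15.0;
[crosshairs forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
// Redraw the crosshairs whenever a new set of corner points arrives.
cornerDetector.cornersDetectedBlock = ^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
    [crosshairs renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
};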

View File

@ -0,0 +1,13 @@
#import "GPUImageFilter.h"
@interface GPUImageCrosshatchFilter : GPUImageFilter
{
GLint crossHatchSpacingUniform, lineWidthUniform;
}
// The fractional width of the image to use as the spacing for the crosshatch. The default is 0.03.
@property(readwrite, nonatomic) CGFloat crossHatchSpacing;
// A relative width for the crosshatch lines. The default is 0.003.
@property(readwrite, nonatomic) CGFloat lineWidth;
@end

View File

@ -0,0 +1,163 @@
#import "GPUImageCrosshatchFilter.h"
// Shader code based on http://machinesdontcare.wordpress.com/
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageCrosshatchFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float crossHatchSpacing;
uniform highp float lineWidth;
const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);
void main()
{
highp float luminance = dot(texture2D(inputImageTexture, textureCoordinate).rgb, W);
lowp vec4 colorToDisplay = vec4(1.0, 1.0, 1.0, 1.0);
if (luminance < 1.00)
{
if (mod(textureCoordinate.x + textureCoordinate.y, crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.75)
{
if (mod(textureCoordinate.x - textureCoordinate.y, crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.50)
{
if (mod(textureCoordinate.x + textureCoordinate.y - (crossHatchSpacing / 2.0), crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.3)
{
if (mod(textureCoordinate.x - textureCoordinate.y - (crossHatchSpacing / 2.0), crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
gl_FragColor = colorToDisplay;
}
);
#else
NSString *const kGPUImageCrosshatchFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float crossHatchSpacing;
uniform float lineWidth;
const vec3 W = vec3(0.2125, 0.7154, 0.0721);
void main()
{
float luminance = dot(texture2D(inputImageTexture, textureCoordinate).rgb, W);
vec4 colorToDisplay = vec4(1.0, 1.0, 1.0, 1.0);
if (luminance < 1.00)
{
if (mod(textureCoordinate.x + textureCoordinate.y, crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.75)
{
if (mod(textureCoordinate.x - textureCoordinate.y, crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.50)
{
if (mod(textureCoordinate.x + textureCoordinate.y - (crossHatchSpacing / 2.0), crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
if (luminance < 0.3)
{
if (mod(textureCoordinate.x - textureCoordinate.y - (crossHatchSpacing / 2.0), crossHatchSpacing) <= lineWidth)
{
colorToDisplay = vec4(0.0, 0.0, 0.0, 1.0);
}
}
gl_FragColor = colorToDisplay;
}
);
#endif
@implementation GPUImageCrosshatchFilter
@synthesize crossHatchSpacing = _crossHatchSpacing;
@synthesize lineWidth = _lineWidth;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageCrosshatchFragmentShaderString]))
{
return nil;
}
crossHatchSpacingUniform = [filterProgram uniformIndex:@"crossHatchSpacing"];
lineWidthUniform = [filterProgram uniformIndex:@"lineWidth"];
self.crossHatchSpacing = 0.03;
self.lineWidth = 0.003;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setCrossHatchSpacing:(CGFloat)newValue;
{
CGFloat singlePixelSpacing;
if (inputTextureSize.width != 0.0)
{
singlePixelSpacing = 1.0 / inputTextureSize.width;
}
else
{
singlePixelSpacing = 1.0 / 2048.0;
}
if (newValue < singlePixelSpacing)
{
_crossHatchSpacing = singlePixelSpacing;
}
else
{
_crossHatchSpacing = newValue;
}
[self setFloat:_crossHatchSpacing forUniform:crossHatchSpacingUniform program:filterProgram];
}
- (void)setLineWidth:(CGFloat)newValue;
{
_lineWidth = newValue;
[self setFloat:_lineWidth forUniform:lineWidthUniform program:filterProgram];
}
@end
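Both properties are resolution-relative, so the same values give a similar look across input sizes. A minimal sketch (videoCamera and filterView are assumed to exist elsewhere):

GPUImageCrosshatchFilter *crosshatch = [[GPUImageCrosshatchFilter alloc] init];
crosshatch.crossHatchSpacing = 0.02; // fraction of the image width between hatch lines
crosshatch.lineWidth = 0.002;        // relative thickness of each hatch line
[videoCamera addTarget:crosshatch];
[crosshatch addTarget:filterView];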

View File

@ -0,0 +1,7 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageDarkenBlendFilter : GPUImageTwoInputFilter
{
}
@end

View File

@ -0,0 +1,52 @@
#import "GPUImageDarkenBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDarkenBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
lowp vec4 base = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 overlayer = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(min(overlayer.rgb * base.a, base.rgb * overlayer.a) + overlayer.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlayer.a), 1.0);
}
);
#else
NSString *const kGPUImageDarkenBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlayer = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(min(overlayer.rgb * base.a, base.rgb * overlayer.a) + overlayer.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlayer.a), 1.0);
}
);
#endif
@implementation GPUImageDarkenBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDarkenBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end
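Both shader variants implement the premultiplied-alpha darken equation. Writing the first input as $B$ (base) and the second as $O$ (overlay), per channel:

$$ C_{out} = \min(O_c B_a,\; B_c O_a) + O_c (1 - B_a) + B_c (1 - O_a) $$

For fully opaque inputs ($B_a = O_a = 1$) this reduces to a plain per-channel $\min(B_c, O_c)$.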

View File

@ -0,0 +1,7 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageDifferenceBlendFilter : GPUImageTwoInputFilter
{
}
@end

View File

@ -0,0 +1,50 @@
#import "GPUImageDifferenceBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDifferenceBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
mediump vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
mediump vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(abs(textureColor2.rgb - textureColor.rgb), textureColor.a);
}
);
#else
NSString *const kGPUImageDifferenceBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = vec4(abs(textureColor2.rgb - textureColor.rgb), textureColor.a);
}
);
#endif
@implementation GPUImageDifferenceBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDifferenceBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end
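A hedged wiring sketch that applies to any of these two-input blends: the first target added feeds inputImageTexture, the second inputImageTexture2, and both sources must be processed before capturing (baseImage and overlayImage are placeholder UIImages):

GPUImagePicture *basePicture = [[GPUImagePicture alloc] initWithImage:baseImage];
GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageDifferenceBlendFilter *difference = [[GPUImageDifferenceBlendFilter alloc] init];
[basePicture addTarget:difference];    // becomes inputImageTexture
[overlayPicture addTarget:difference]; // becomes inputImageTexture2
[difference useNextFrameForImageCapture];
[basePicture processImage];
[overlayPicture processImage];
UIImage *perChannelDelta = [difference imageFromCurrentFramebuffer];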

View File

@ -0,0 +1,16 @@
#import "GPUImageTwoPassTextureSamplingFilter.h"
// For each pixel, this sets it to the maximum value of the red channel in a rectangular neighborhood extending out dilationRadius pixels from the center.
// This extends out bright features, and is most commonly used with black-and-white thresholded images.
extern NSString *const kGPUImageDilationRadiusOneVertexShaderString;
extern NSString *const kGPUImageDilationRadiusTwoVertexShaderString;
extern NSString *const kGPUImageDilationRadiusThreeVertexShaderString;
extern NSString *const kGPUImageDilationRadiusFourVertexShaderString;
@interface GPUImageDilationFilter : GPUImageTwoPassTextureSamplingFilter
// Acceptable values for dilationRadius, which sets the distance in pixels to sample out from the center, are 1, 2, 3, and 4.
- (id)initWithRadius:(NSUInteger)dilationRadius;
@end

View File

@ -0,0 +1,431 @@
#import "GPUImageDilationFilter.h"
@implementation GPUImageDilationFilter
NSString *const kGPUImageDilationRadiusOneVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
void main()
{
gl_Position = position;
vec2 offset = vec2(texelWidthOffset, texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepNegativeTextureCoordinate = inputTextureCoordinate - offset;
oneStepPositiveTextureCoordinate = inputTextureCoordinate + offset;
}
);
NSString *const kGPUImageDilationRadiusTwoVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
void main()
{
gl_Position = position;
vec2 offset = vec2(texelWidthOffset, texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepNegativeTextureCoordinate = inputTextureCoordinate - offset;
oneStepPositiveTextureCoordinate = inputTextureCoordinate + offset;
twoStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 2.0);
twoStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 2.0);
}
);
NSString *const kGPUImageDilationRadiusThreeVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
void main()
{
gl_Position = position;
vec2 offset = vec2(texelWidthOffset, texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepNegativeTextureCoordinate = inputTextureCoordinate - offset;
oneStepPositiveTextureCoordinate = inputTextureCoordinate + offset;
twoStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 2.0);
twoStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 2.0);
threeStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 3.0);
threeStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 3.0);
}
);
NSString *const kGPUImageDilationRadiusFourVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform float texelWidthOffset;
uniform float texelHeightOffset;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
varying vec2 fourStepsPositiveTextureCoordinate;
varying vec2 fourStepsNegativeTextureCoordinate;
void main()
{
gl_Position = position;
vec2 offset = vec2(texelWidthOffset, texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepNegativeTextureCoordinate = inputTextureCoordinate - offset;
oneStepPositiveTextureCoordinate = inputTextureCoordinate + offset;
twoStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 2.0);
twoStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 2.0);
threeStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 3.0);
threeStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 3.0);
fourStepsNegativeTextureCoordinate = inputTextureCoordinate - (offset * 4.0);
fourStepsPositiveTextureCoordinate = inputTextureCoordinate + (offset * 4.0);
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDilationRadiusOneFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
lowp float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusTwoFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
lowp float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusThreeFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
lowp float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
maxValue = max(maxValue, threeStepsPositiveIntensity);
maxValue = max(maxValue, threeStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusFourFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
varying vec2 fourStepsPositiveTextureCoordinate;
varying vec2 fourStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float fourStepsPositiveIntensity = texture2D(inputImageTexture, fourStepsPositiveTextureCoordinate).r;
float fourStepsNegativeIntensity = texture2D(inputImageTexture, fourStepsNegativeTextureCoordinate).r;
lowp float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
maxValue = max(maxValue, threeStepsPositiveIntensity);
maxValue = max(maxValue, threeStepsNegativeIntensity);
maxValue = max(maxValue, fourStepsPositiveIntensity);
maxValue = max(maxValue, fourStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
#else
NSString *const kGPUImageDilationRadiusOneFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusTwoFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusThreeFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
maxValue = max(maxValue, threeStepsPositiveIntensity);
maxValue = max(maxValue, threeStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
NSString *const kGPUImageDilationRadiusFourFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
varying vec2 fourStepsPositiveTextureCoordinate;
varying vec2 fourStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float fourStepsPositiveIntensity = texture2D(inputImageTexture, fourStepsPositiveTextureCoordinate).r;
float fourStepsNegativeIntensity = texture2D(inputImageTexture, fourStepsNegativeTextureCoordinate).r;
float maxValue = max(centerIntensity, oneStepPositiveIntensity);
maxValue = max(maxValue, oneStepNegativeIntensity);
maxValue = max(maxValue, twoStepsPositiveIntensity);
maxValue = max(maxValue, twoStepsNegativeIntensity);
maxValue = max(maxValue, threeStepsPositiveIntensity);
maxValue = max(maxValue, threeStepsNegativeIntensity);
maxValue = max(maxValue, fourStepsPositiveIntensity);
maxValue = max(maxValue, fourStepsNegativeIntensity);
gl_FragColor = vec4(vec3(maxValue), 1.0);
}
);
#endif
#pragma mark -
#pragma mark Initialization and teardown
- (id)initWithRadius:(NSUInteger)dilationRadius;
{
NSString *fragmentShaderForThisRadius = nil;
NSString *vertexShaderForThisRadius = nil;
switch (dilationRadius)
{
case 0:
case 1:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusOneVertexShaderString;
fragmentShaderForThisRadius = kGPUImageDilationRadiusOneFragmentShaderString;
}; break;
case 2:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusTwoVertexShaderString;
fragmentShaderForThisRadius = kGPUImageDilationRadiusTwoFragmentShaderString;
}; break;
case 3:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusThreeVertexShaderString;
fragmentShaderForThisRadius = kGPUImageDilationRadiusThreeFragmentShaderString;
}; break;
case 4:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusFourVertexShaderString;
fragmentShaderForThisRadius = kGPUImageDilationRadiusFourFragmentShaderString;
}; break;
default:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusFourVertexShaderString;
fragmentShaderForThisRadius = kGPUImageDilationRadiusFourFragmentShaderString;
}; break;
}
if (!(self = [super initWithFirstStageVertexShaderFromString:vertexShaderForThisRadius firstStageFragmentShaderFromString:fragmentShaderForThisRadius secondStageVertexShaderFromString:vertexShaderForThisRadius secondStageFragmentShaderFromString:fragmentShaderForThisRadius]))
{
return nil;
}
return self;
}
- (id)init;
{
if (!(self = [self initWithRadius:1]))
{
return nil;
}
return self;
}
@end
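A brief sketch. Because only the red channel is sampled, grayscale or thresholded input is assumed (thresholdFilter is a placeholder for, e.g., a luminance threshold stage):

GPUImageDilationFilter *dilation = [[GPUImageDilationFilter alloc] initWithRadius:2];
[thresholdFilter addTarget:dilation]; // grows bright features by up to 2 px along each pass axis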

View File

@ -0,0 +1,19 @@
#import "GPUImageFilter.h"
@interface GPUImageDirectionalNonMaximumSuppressionFilter : GPUImageFilter
{
GLint texelWidthUniform, texelHeightUniform;
GLint upperThresholdUniform, lowerThresholdUniform;
BOOL hasOverriddenImageSizeFactor;
}
// The texel width and height determine how far out to sample from this texel. By default, this is the normalized width of a pixel, but this can be overridden for different effects.
@property(readwrite, nonatomic) CGFloat texelWidth;
@property(readwrite, nonatomic) CGFloat texelHeight;
// These thresholds set cutoffs for the intensities that definitely get registered (upper threshold) and those that definitely don't (lower threshold).
@property(readwrite, nonatomic) CGFloat upperThreshold;
@property(readwrite, nonatomic) CGFloat lowerThreshold;
@end

View File

@ -0,0 +1,141 @@
#import "GPUImageDirectionalNonMaximumSuppressionFilter.h"
@implementation GPUImageDirectionalNonMaximumSuppressionFilter
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDirectionalNonmaximumSuppressionFragmentShaderString = SHADER_STRING
(
precision mediump float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float texelWidth;
uniform highp float texelHeight;
uniform mediump float upperThreshold;
uniform mediump float lowerThreshold;
void main()
{
vec3 currentGradientAndDirection = texture2D(inputImageTexture, textureCoordinate).rgb;
vec2 gradientDirection = ((currentGradientAndDirection.gb * 2.0) - 1.0) * vec2(texelWidth, texelHeight);
float firstSampledGradientMagnitude = texture2D(inputImageTexture, textureCoordinate + gradientDirection).r;
float secondSampledGradientMagnitude = texture2D(inputImageTexture, textureCoordinate - gradientDirection).r;
float multiplier = step(firstSampledGradientMagnitude, currentGradientAndDirection.r);
multiplier = multiplier * step(secondSampledGradientMagnitude, currentGradientAndDirection.r);
float thresholdCompliance = smoothstep(lowerThreshold, upperThreshold, currentGradientAndDirection.r);
multiplier = multiplier * thresholdCompliance;
gl_FragColor = vec4(multiplier, multiplier, multiplier, 1.0);
}
);
#else
NSString *const kGPUImageDirectionalNonmaximumSuppressionFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float texelWidth;
uniform float texelHeight;
uniform float upperThreshold;
uniform float lowerThreshold;
void main()
{
vec3 currentGradientAndDirection = texture2D(inputImageTexture, textureCoordinate).rgb;
vec2 gradientDirection = ((currentGradientAndDirection.gb * 2.0) - 1.0) * vec2(texelWidth, texelHeight);
float firstSampledGradientMagnitude = texture2D(inputImageTexture, textureCoordinate + gradientDirection).r;
float secondSampledGradientMagnitude = texture2D(inputImageTexture, textureCoordinate - gradientDirection).r;
float multiplier = step(firstSampledGradientMagnitude, currentGradientAndDirection.r);
multiplier = multiplier * step(secondSampledGradientMagnitude, currentGradientAndDirection.r);
float thresholdCompliance = smoothstep(lowerThreshold, upperThreshold, currentGradientAndDirection.r);
multiplier = multiplier * thresholdCompliance;
gl_FragColor = vec4(multiplier, multiplier, multiplier, 1.0);
}
);
#endif
@synthesize texelWidth = _texelWidth;
@synthesize texelHeight = _texelHeight;
@synthesize upperThreshold = _upperThreshold;
@synthesize lowerThreshold = _lowerThreshold;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDirectionalNonmaximumSuppressionFragmentShaderString]))
{
return nil;
}
texelWidthUniform = [filterProgram uniformIndex:@"texelWidth"];
texelHeightUniform = [filterProgram uniformIndex:@"texelHeight"];
upperThresholdUniform = [filterProgram uniformIndex:@"upperThreshold"];
lowerThresholdUniform = [filterProgram uniformIndex:@"lowerThreshold"];
self.upperThreshold = 0.5;
self.lowerThreshold = 0.1;
return self;
}
- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
if (!hasOverriddenImageSizeFactor)
{
_texelWidth = 1.0 / filterFrameSize.width;
_texelHeight = 1.0 / filterFrameSize.height;
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:filterProgram];
glUniform1f(texelWidthUniform, _texelWidth);
glUniform1f(texelHeightUniform, _texelHeight);
});
}
}
#pragma mark -
#pragma mark Accessors
- (void)setTexelWidth:(CGFloat)newValue;
{
hasOverriddenImageSizeFactor = YES;
_texelWidth = newValue;
[self setFloat:_texelWidth forUniform:texelWidthUniform program:filterProgram];
}
- (void)setTexelHeight:(CGFloat)newValue;
{
hasOverriddenImageSizeFactor = YES;
_texelHeight = newValue;
[self setFloat:_texelHeight forUniform:texelHeightUniform program:filterProgram];
}
- (void)setLowerThreshold:(CGFloat)newValue;
{
_lowerThreshold = newValue;
[self setFloat:_lowerThreshold forUniform:lowerThresholdUniform program:filterProgram];
}
- (void)setUpperThreshold:(CGFloat)newValue;
{
_upperThreshold = newValue;
[self setFloat:_upperThreshold forUniform:upperThresholdUniform program:filterProgram];
}
@end
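The thresholds act like the hysteresis bounds in a Canny-style pipeline: gradients above upperThreshold always survive, those below lowerThreshold are always culled, and smoothstep blends in between. A hedged sketch, assuming the input packs gradient magnitude and direction as produced by the directional Sobel filter below:

GPUImageDirectionalNonMaximumSuppressionFilter *suppression =
    [[GPUImageDirectionalNonMaximumSuppressionFilter alloc] init];
suppression.upperThreshold = 0.4; // definitely an edge above this magnitude
suppression.lowerThreshold = 0.1; // definitely not an edge below this
[directionalSobel addTarget:suppression]; // directionalSobel is an assumed upstream stage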

View File

@ -0,0 +1,5 @@
#import "GPUImage3x3TextureSamplingFilter.h"
@interface GPUImageDirectionalSobelEdgeDetectionFilter : GPUImage3x3TextureSamplingFilter
@end

View File

@ -0,0 +1,103 @@
#import "GPUImageDirectionalSobelEdgeDetectionFilter.h"
@implementation GPUImageDirectionalSobelEdgeDetectionFilter
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDirectionalSobelEdgeDetectionFragmentShaderString = SHADER_STRING
(
precision mediump float;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
vec2 gradientDirection;
gradientDirection.x = -bottomLeftIntensity - 2.0 * leftIntensity - topLeftIntensity + bottomRightIntensity + 2.0 * rightIntensity + topRightIntensity;
gradientDirection.y = -topLeftIntensity - 2.0 * topIntensity - topRightIntensity + bottomLeftIntensity + 2.0 * bottomIntensity + bottomRightIntensity;
float gradientMagnitude = length(gradientDirection);
vec2 normalizedDirection = normalize(gradientDirection);
normalizedDirection = sign(normalizedDirection) * floor(abs(normalizedDirection) + 0.617316); // Offset by 1-sin(pi/8) to set to 0 if near axis, 1 if away
normalizedDirection = (normalizedDirection + 1.0) * 0.5; // Place -1.0 - 1.0 within 0 - 1.0
gl_FragColor = vec4(gradientMagnitude, normalizedDirection.x, normalizedDirection.y, 1.0);
}
);
#else
NSString *const kGPUImageDirectionalSobelEdgeDetectionFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
vec2 gradientDirection;
gradientDirection.x = -bottomLeftIntensity - 2.0 * leftIntensity - topLeftIntensity + bottomRightIntensity + 2.0 * rightIntensity + topRightIntensity;
gradientDirection.y = -topLeftIntensity - 2.0 * topIntensity - topRightIntensity + bottomLeftIntensity + 2.0 * bottomIntensity + bottomRightIntensity;
float gradientMagnitude = length(gradientDirection);
vec2 normalizedDirection = normalize(gradientDirection);
normalizedDirection = sign(normalizedDirection) * floor(abs(normalizedDirection) + 0.617316); // Offset by 1-sin(pi/8) to set to 0 if near axis, 1 if away
normalizedDirection = (normalizedDirection + 1.0) * 0.5; // Place -1.0 - 1.0 within 0 - 1.0
gl_FragColor = vec4(gradientMagnitude, normalizedDirection.x, normalizedDirection.y, 1.0);
}
);
#endif
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDirectionalSobelEdgeDetectionFragmentShaderString]))
{
return nil;
}
return self;
}
@end
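The shader applies the standard pair of 3x3 Sobel kernels; the resulting direction vector is normalized, snapped to the nearest axis or diagonal by the floor-offset trick, and remapped from [-1, 1] to [0, 1] so it can ride along in the green/blue channels:

$$ G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad \|\nabla I\| = \sqrt{G_x^2 + G_y^2} $$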

View File

@ -0,0 +1,11 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageDissolveBlendFilter : GPUImageTwoInputFilter
{
GLint mixUniform;
}
// Mix ranges from 0.0 (only image 1) to 1.0 (only image 2), with 0.5 (half of either) as the normal level.
@property(readwrite, nonatomic) CGFloat mix;
@end

View File

@ -0,0 +1,72 @@
#import "GPUImageDissolveBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDissolveBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float mixturePercent;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = mix(textureColor, textureColor2, mixturePercent);
}
);
#else
NSString *const kGPUImageDissolveBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform float mixturePercent;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
gl_FragColor = mix(textureColor, textureColor2, mixturePercent);
}
);
#endif
@implementation GPUImageDissolveBlendFilter
@synthesize mix = _mix;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDissolveBlendFragmentShaderString]))
{
return nil;
}
mixUniform = [filterProgram uniformIndex:@"mixturePercent"];
self.mix = 0.5;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setMix:(CGFloat)newValue;
{
_mix = newValue;
[self setFloat:_mix forUniform:mixUniform program:filterProgram];
}
@end
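A hedged sketch of a crossfade: ramping mix from 0.0 to 1.0 dissolves from the first input to the second (cameraA and cameraB are placeholder sources):

GPUImageDissolveBlendFilter *dissolve = [[GPUImageDissolveBlendFilter alloc] init];
[cameraA addTarget:dissolve]; // shown fully at mix = 0.0
[cameraB addTarget:dissolve]; // shown fully at mix = 1.0
dissolve.mix = 0.25;          // 25% of the second input blended over the first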

View File

@ -0,0 +1,5 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageDivideBlendFilter : GPUImageTwoInputFilter
@end

View File

@ -0,0 +1,96 @@
#import "GPUImageDivideBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageDivideBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
mediump vec4 base = texture2D(inputImageTexture, textureCoordinate);
mediump vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
mediump float ra;
if (overlay.a == 0.0 || ((base.r / overlay.r) > (base.a / overlay.a)))
ra = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
else
ra = (base.r * overlay.a * overlay.a) / overlay.r + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
mediump float ga;
if (overlay.a == 0.0 || ((base.g / overlay.g) > (base.a / overlay.a)))
ga = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
else
ga = (base.g * overlay.a * overlay.a) / overlay.g + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
mediump float ba;
if (overlay.a == 0.0 || ((base.b / overlay.b) > (base.a / overlay.a)))
ba = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
else
ba = (base.b * overlay.a * overlay.a) / overlay.b + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
mediump float a = overlay.a + base.a - overlay.a * base.a;
gl_FragColor = vec4(ra, ga, ba, a);
}
);
#else
NSString *const kGPUImageDivideBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
float ra;
if (overlay.a == 0.0 || ((base.r / overlay.r) > (base.a / overlay.a)))
ra = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
else
ra = (base.r * overlay.a * overlay.a) / overlay.r + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
float ga;
if (overlay.a == 0.0 || ((base.g / overlay.g) > (base.a / overlay.a)))
ga = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
else
ga = (base.g * overlay.a * overlay.a) / overlay.g + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
float ba;
if (overlay.a == 0.0 || ((base.b / overlay.b) > (base.a / overlay.a)))
ba = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
else
ba = (base.b * overlay.a * overlay.a) / overlay.b + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
float a = overlay.a + base.a - overlay.a * base.a;
gl_FragColor = vec4(ra, ga, ba, a);
}
);
#endif
@implementation GPUImageDivideBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageDivideBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end
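The branching guards against division blow-ups when a channel quotient would exceed the alpha ratio. For fully opaque inputs ($B_a = O_a = 1$) both shader variants collapse to a clamped per-channel divide of the first input by the second:

$$ C_{out} = \min\!\left(\frac{B_c}{O_c},\; 1\right) $$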

View File

@ -0,0 +1,8 @@
#import "GPUImage3x3ConvolutionFilter.h"
@interface GPUImageEmbossFilter : GPUImage3x3ConvolutionFilter
// The strength of the embossing, from 0.0 to 4.0, with 1.0 as the normal level.
@property(readwrite, nonatomic) CGFloat intensity;
@end

View File

@ -0,0 +1,49 @@
#import "GPUImageEmbossFilter.h"
@implementation GPUImageEmbossFilter
@synthesize intensity = _intensity;
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
self.intensity = 1.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setIntensity:(CGFloat)newValue;
{
// [(GPUImage3x3ConvolutionFilter *)filter setConvolutionMatrix:(GPUMatrix3x3){
// {-2.0f, -1.0f, 0.0f},
// {-1.0f, 1.0f, 1.0f},
// { 0.0f, 1.0f, 2.0f}
// }];
_intensity = newValue;
GPUMatrix3x3 newConvolutionMatrix;
newConvolutionMatrix.one.one = _intensity * (-2.0);
newConvolutionMatrix.one.two = -_intensity;
newConvolutionMatrix.one.three = 0.0f;
newConvolutionMatrix.two.one = -_intensity;
newConvolutionMatrix.two.two = 1.0;
newConvolutionMatrix.two.three = _intensity;
newConvolutionMatrix.three.one = 0.0f;
newConvolutionMatrix.three.two = _intensity;
newConvolutionMatrix.three.three = _intensity * 2.0;
self.convolutionKernel = newConvolutionMatrix;
}
@end
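setIntensity: rebuilds the 3x3 convolution kernel each time it is called; written out, with $i$ as the intensity:

$$ K(i) = \begin{bmatrix} -2i & -i & 0 \\ -i & 1 & i \\ 0 & i & 2i \end{bmatrix} $$

At $i = 0$ this degenerates to the identity kernel (the image passes through unchanged); larger $i$ strengthens the diagonal relief effect.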

View File

@ -0,0 +1,11 @@
#import "GPUImageTwoPassTextureSamplingFilter.h"
// For each pixel, this sets it to the minimum value of the red channel in a rectangular neighborhood extending out erosionRadius pixels from the center.
// This extends out dark features, and is most commonly used with black-and-white thresholded images.
@interface GPUImageErosionFilter : GPUImageTwoPassTextureSamplingFilter
// Acceptable values for erosionRadius, which sets the distance in pixels to sample out from the center, are 1, 2, 3, and 4.
- (id)initWithRadius:(NSUInteger)erosionRadius;
@end

View File

@ -0,0 +1,312 @@
#import "GPUImageErosionFilter.h"
#import "GPUImageDilationFilter.h"
@implementation GPUImageErosionFilter
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageErosionRadiusOneFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
lowp float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusTwoFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
lowp float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusThreeFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
lowp float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
minValue = min(minValue, threeStepsPositiveIntensity);
minValue = min(minValue, threeStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusFourFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
varying vec2 fourStepsPositiveTextureCoordinate;
varying vec2 fourStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float fourStepsPositiveIntensity = texture2D(inputImageTexture, fourStepsPositiveTextureCoordinate).r;
float fourStepsNegativeIntensity = texture2D(inputImageTexture, fourStepsNegativeTextureCoordinate).r;
lowp float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
minValue = min(minValue, threeStepsPositiveIntensity);
minValue = min(minValue, threeStepsNegativeIntensity);
minValue = min(minValue, fourStepsPositiveIntensity);
minValue = min(minValue, fourStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
#else
NSString *const kGPUImageErosionRadiusOneFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusTwoFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusThreeFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
minValue = min(minValue, threeStepsPositiveIntensity);
minValue = min(minValue, threeStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
NSString *const kGPUImageErosionRadiusFourFragmentShaderString = SHADER_STRING
(
varying vec2 centerTextureCoordinate;
varying vec2 oneStepPositiveTextureCoordinate;
varying vec2 oneStepNegativeTextureCoordinate;
varying vec2 twoStepsPositiveTextureCoordinate;
varying vec2 twoStepsNegativeTextureCoordinate;
varying vec2 threeStepsPositiveTextureCoordinate;
varying vec2 threeStepsNegativeTextureCoordinate;
varying vec2 fourStepsPositiveTextureCoordinate;
varying vec2 fourStepsNegativeTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float centerIntensity = texture2D(inputImageTexture, centerTextureCoordinate).r;
float oneStepPositiveIntensity = texture2D(inputImageTexture, oneStepPositiveTextureCoordinate).r;
float oneStepNegativeIntensity = texture2D(inputImageTexture, oneStepNegativeTextureCoordinate).r;
float twoStepsPositiveIntensity = texture2D(inputImageTexture, twoStepsPositiveTextureCoordinate).r;
float twoStepsNegativeIntensity = texture2D(inputImageTexture, twoStepsNegativeTextureCoordinate).r;
float threeStepsPositiveIntensity = texture2D(inputImageTexture, threeStepsPositiveTextureCoordinate).r;
float threeStepsNegativeIntensity = texture2D(inputImageTexture, threeStepsNegativeTextureCoordinate).r;
float fourStepsPositiveIntensity = texture2D(inputImageTexture, fourStepsPositiveTextureCoordinate).r;
float fourStepsNegativeIntensity = texture2D(inputImageTexture, fourStepsNegativeTextureCoordinate).r;
float minValue = min(centerIntensity, oneStepPositiveIntensity);
minValue = min(minValue, oneStepNegativeIntensity);
minValue = min(minValue, twoStepsPositiveIntensity);
minValue = min(minValue, twoStepsNegativeIntensity);
minValue = min(minValue, threeStepsPositiveIntensity);
minValue = min(minValue, threeStepsNegativeIntensity);
minValue = min(minValue, fourStepsPositiveIntensity);
minValue = min(minValue, fourStepsNegativeIntensity);
gl_FragColor = vec4(vec3(minValue), 1.0);
}
);
#endif
#pragma mark -
#pragma mark Initialization and teardown
- (id)initWithRadius:(NSUInteger)dilationRadius;
{
NSString *fragmentShaderForThisRadius = nil;
NSString *vertexShaderForThisRadius = nil;
switch (dilationRadius)
{
case 0:
case 1:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusOneVertexShaderString;
fragmentShaderForThisRadius = kGPUImageErosionRadiusOneFragmentShaderString;
}; break;
case 2:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusTwoVertexShaderString;
fragmentShaderForThisRadius = kGPUImageErosionRadiusTwoFragmentShaderString;
}; break;
case 3:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusThreeVertexShaderString;
fragmentShaderForThisRadius = kGPUImageErosionRadiusThreeFragmentShaderString;
}; break;
case 4:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusFourVertexShaderString;
fragmentShaderForThisRadius = kGPUImageErosionRadiusFourFragmentShaderString;
}; break;
default:
{
vertexShaderForThisRadius = kGPUImageDilationRadiusFourVertexShaderString;
fragmentShaderForThisRadius = kGPUImageErosionRadiusFourFragmentShaderString;
}; break;
}
if (!(self = [super initWithFirstStageVertexShaderFromString:vertexShaderForThisRadius firstStageFragmentShaderFromString:fragmentShaderForThisRadius secondStageVertexShaderFromString:vertexShaderForThisRadius secondStageFragmentShaderFromString:fragmentShaderForThisRadius]))
{
return nil;
}
return self;
}
- (id)init;
{
if (!(self = [self initWithRadius:1]))
{
return nil;
}
return self;
}
@end
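A minimal usage sketch for the radius-based initializer above. The class name GPUImageErosionFilter is an assumption suggested by the shader constant names (the diff viewer dropped the file headers), and the bundled image name is illustrative:

GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"mask.png"]];
GPUImageErosionFilter *erosion = [[GPUImageErosionFilter alloc] initWithRadius:2]; // radii above 4 fall back to 4
[source addTarget:erosion];
[erosion useNextFrameForImageCapture];
[source processImage];
UIImage *eroded = [erosion imageFromCurrentFramebuffer]; // per the shaders, erosion is computed on the red channel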


@ -0,0 +1,7 @@
#import "GPUImageTwoInputFilter.h"
@interface GPUImageExclusionBlendFilter : GPUImageTwoInputFilter
{
}
@end


@ -0,0 +1,56 @@
#import "GPUImageExclusionBlendFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageExclusionBlendFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
mediump vec4 base = texture2D(inputImageTexture, textureCoordinate);
mediump vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
// Dca = (Sca.Da + Dca.Sa - 2.Sca.Dca) + Sca.(1 - Da) + Dca.(1 - Sa)
gl_FragColor = vec4((overlay.rgb * base.a + base.rgb * overlay.a - 2.0 * overlay.rgb * base.rgb) + overlay.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlay.a), base.a);
}
);
#else
NSString *const kGPUImageExclusionBlendFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
vec4 base = texture2D(inputImageTexture, textureCoordinate);
vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
// Dca = (Sca.Da + Dca.Sa - 2.Sca.Dca) + Sca.(1 - Da) + Dca.(1 - Sa)
gl_FragColor = vec4((overlay.rgb * base.a + base.rgb * overlay.a - 2.0 * overlay.rgb * base.rgb) + overlay.rgb * (1.0 - base.a) + base.rgb * (1.0 - overlay.a), base.a);
}
);
#endif
@implementation GPUImageExclusionBlendFilter
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageExclusionBlendFragmentShaderString]))
{
return nil;
}
return self;
}
@end
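The shader above applies the standard exclusion compositing equation quoted in its comment. A two-input sketch (both image names are assumptions; the first target added feeds inputImageTexture, the second inputImageTexture2):

GPUImagePicture *base = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"base.png"]];
GPUImagePicture *overlay = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]];
GPUImageExclusionBlendFilter *blend = [[GPUImageExclusionBlendFilter alloc] init];
[base addTarget:blend];
[overlay addTarget:blend];
[blend useNextFrameForImageCapture];
[base processImage];
[overlay processImage];
UIImage *blended = [blend imageFromCurrentFramebuffer];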


@ -0,0 +1,11 @@
#import "GPUImageFilter.h"
@interface GPUImageExposureFilter : GPUImageFilter
{
GLint exposureUniform;
}
// Exposure ranges from -10.0 to 10.0, with 0.0 as the normal level
@property(readwrite, nonatomic) CGFloat exposure;
@end


@ -0,0 +1,66 @@
#import "GPUImageExposureFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImageExposureFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float exposure;
void main()
{
highp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4(textureColor.rgb * pow(2.0, exposure), textureColor.w);
}
);
#else
NSString *const kGPUImageExposureFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float exposure;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
gl_FragColor = vec4(textureColor.rgb * pow(2.0, exposure), textureColor.w);
}
);
#endif
@implementation GPUImageExposureFilter
@synthesize exposure = _exposure;
#pragma mark -
#pragma mark Initialization and teardown
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageExposureFragmentShaderString]))
{
return nil;
}
exposureUniform = [filterProgram uniformIndex:@"exposure"];
self.exposure = 0.0;
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setExposure:(CGFloat)newValue;
{
_exposure = newValue;
[self setFloat:_exposure forUniform:exposureUniform program:filterProgram];
}
@end
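Because the shader multiplies the color by pow(2.0, exposure), each unit of exposure is one photographic stop. A brief sketch:

GPUImageExposureFilter *exposureFilter = [[GPUImageExposureFilter alloc] init];
exposureFilter.exposure = 1.0; // rgb * 2^1, one stop brighter; -1.0 would halve the brightness, 0.0 is neutral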


@ -0,0 +1,33 @@
#import "GPUImageFilterGroup.h"
@class GPUImageGrayscaleFilter;
@class GPUImage3x3TextureSamplingFilter;
@class GPUImageNonMaximumSuppressionFilter;
/*
An implementation of the Features from Accelerated Segment Test (FAST) feature detector as described in the following publications:
E. Rosten and T. Drummond. Fusing points and lines for high performance tracking. IEEE International Conference on Computer Vision, 2005.
E. Rosten and T. Drummond. Machine learning for high-speed corner detection. European Conference on Computer Vision, 2006.
For more about the FAST feature detector, see the resources here:
http://www.edwardrosten.com/work/fast.html
*/
typedef enum { kGPUImageFAST12Contiguous, kGPUImageFAST12ContiguousNonMaximumSuppressed} GPUImageFASTDetectorType;
@interface GPUImageFASTCornerDetectionFilter : GPUImageFilterGroup
{
GPUImageGrayscaleFilter *luminanceReductionFilter;
GPUImage3x3TextureSamplingFilter *featureDetectionFilter;
GPUImageNonMaximumSuppressionFilter *nonMaximumSuppressionFilter;
// Generate a lookup texture based on the bit patterns
// Step 1: convert to monochrome if necessary
// Step 2: do a lookup at each pixel based on the Bresenham circle, encode comparison in two color components
// Step 3: do non-maximum suppression of close corner points
}
- (id)initWithFASTDetectorVariant:(GPUImageFASTDetectorType)detectorType;
@end


@ -0,0 +1,89 @@
#import "GPUImageFASTCornerDetectionFilter.h"
#import "GPUImageGrayscaleFilter.h"
#import "GPUImage3x3TextureSamplingFilter.h"
#import "GPUImageNonMaximumSuppressionFilter.h"
// 14 total texture coordinates from the vertex shader for non-dependent reads,
// then 3 texture coordinates for dependent reads
NSString *const kGPUImageFASTDetectorFragmentShaderString = SHADER_STRING
(
precision highp float;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
uniform sampler2D lookupTable;
void main()
{
lowp float centerIntensity = texture2D(inputImageTexture, textureCoordinate).r;
lowp float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
lowp float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
lowp float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
lowp float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
lowp float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
lowp float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
lowp float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
lowp float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
lowp float byteTally = 1.0 / 255.0 * step(centerIntensity, topRightIntensity);
byteTally += 2.0 / 255.0 * step(centerIntensity, topIntensity);
byteTally += 4.0 / 255.0 * step(centerIntensity, topLeftIntensity);
byteTally += 8.0 / 255.0 * step(centerIntensity, leftIntensity);
byteTally += 16.0 / 255.0 * step(centerIntensity, bottomLeftIntensity);
byteTally += 32.0 / 255.0 * step(centerIntensity, bottomIntensity);
byteTally += 64.0 / 255.0 * step(centerIntensity, bottomRightIntensity);
byteTally += 128.0 / 255.0 * step(centerIntensity, rightIntensity);
// TODO: Replace the above with a dot product and two vec4s
// TODO: Apply step to a matrix, rather than individually
gl_FragColor = vec4(byteTally, byteTally, byteTally, 1.0);
}
);
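// One way to carry out the dot-product TODO above (a sketch, not part of this diff):
// lowp vec4 comparisonsA = step(vec4(centerIntensity), vec4(topRightIntensity, topIntensity, topLeftIntensity, leftIntensity));
// lowp vec4 comparisonsB = step(vec4(centerIntensity), vec4(bottomLeftIntensity, bottomIntensity, bottomRightIntensity, rightIntensity));
// lowp float tally = dot(comparisonsA, vec4(1.0, 2.0, 4.0, 8.0) / 255.0)
//                  + dot(comparisonsB, vec4(16.0, 32.0, 64.0, 128.0) / 255.0); // same byte encoding as byteTally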
@implementation GPUImageFASTCornerDetectionFilter
- (id)init;
{
if (!(self = [self initWithFASTDetectorVariant:kGPUImageFAST12ContiguousNonMaximumSuppressed]))
{
return nil;
}
return self;
}
- (id)initWithFASTDetectorVariant:(GPUImageFASTDetectorType)detectorType;
{
if (!(self = [super init]))
{
return nil;
}
// [derivativeFilter addTarget:blurFilter];
// [blurFilter addTarget:harrisCornerDetectionFilter];
// [harrisCornerDetectionFilter addTarget:nonMaximumSuppressionFilter];
// [simpleThresholdFilter addTarget:colorPackingFilter];
// self.initialFilters = [NSArray arrayWithObjects:derivativeFilter, nil];
// self.terminalFilter = colorPackingFilter;
// self.terminalFilter = nonMaximumSuppressionFilter;
return self;
}
@end


@ -0,0 +1,15 @@
#import "GPUImageFilter.h"
@interface GPUImageFalseColorFilter : GPUImageFilter
{
GLint firstColorUniform, secondColorUniform;
}
// The first and second colors specify what colors replace the dark and light areas of the image, respectively. The defaults are (0.0, 0.0, 0.5) and (1.0, 0.0, 0.0).
@property(readwrite, nonatomic) GPUVector4 firstColor;
@property(readwrite, nonatomic) GPUVector4 secondColor;
- (void)setFirstColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
- (void)setSecondColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
@end


@ -0,0 +1,101 @@
#import "GPUImageFalseColorFilter.h"
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUFalseColorFragmentShaderString = SHADER_STRING
(
precision lowp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float intensity;
uniform vec3 firstColor;
uniform vec3 secondColor;
const mediump vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
float luminance = dot(textureColor.rgb, luminanceWeighting);
gl_FragColor = vec4( mix(firstColor.rgb, secondColor.rgb, luminance), textureColor.a);
}
);
#else
NSString *const kGPUFalseColorFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float intensity;
uniform vec3 firstColor;
uniform vec3 secondColor;
const vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
float luminance = dot(textureColor.rgb, luminanceWeighting);
gl_FragColor = vec4( mix(firstColor.rgb, secondColor.rgb, luminance), textureColor.a);
}
);
#endif
@implementation GPUImageFalseColorFilter
@synthesize secondColor = _secondColor;
@synthesize firstColor = _firstColor;
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kGPUFalseColorFragmentShaderString]))
{
return nil;
}
firstColorUniform = [filterProgram uniformIndex:@"firstColor"];
secondColorUniform = [filterProgram uniformIndex:@"secondColor"];
self.firstColor = (GPUVector4){0.0f, 0.0f, 0.5f, 1.0f};
self.secondColor = (GPUVector4){1.0f, 0.0f, 0.0f, 1.0f};
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setFirstColor:(GPUVector4)newValue;
{
_firstColor = newValue;
[self setFirstColorRed:_firstColor.one green:_firstColor.two blue:_firstColor.three];
}
- (void)setSecondColor:(GPUVector4)newValue;
{
_secondColor = newValue;
[self setSecondColorRed:_secondColor.one green:_secondColor.two blue:_secondColor.three];
}
- (void)setFirstColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
{
GPUVector3 firstColor = {redComponent, greenComponent, blueComponent};
[self setVec3:firstColor forUniform:firstColorUniform program:filterProgram];
}
- (void)setSecondColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent;
{
GPUVector3 secondColor = {redComponent, greenComponent, blueComponent};
[self setVec3:secondColor forUniform:secondColorUniform program:filterProgram];
}
@end
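A sketch of the accessors above, remapping shadows and highlights (these values simply restate the defaults):

GPUImageFalseColorFilter *falseColor = [[GPUImageFalseColorFilter alloc] init];
[falseColor setFirstColorRed:0.0 green:0.0 blue:0.5];  // dark areas -> navy
[falseColor setSecondColorRed:1.0 green:0.0 blue:0.0]; // light areas -> red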


@ -0,0 +1,134 @@
#import "GPUImageOutput.h"
#define STRINGIZE(x) #x
#define STRINGIZE2(x) STRINGIZE(x)
#define SHADER_STRING(text) @ STRINGIZE2(text)
#define GPUImageHashIdentifier #
#define GPUImageWrappedLabel(x) x
#define GPUImageEscapedHashIdentifier(a) GPUImageWrappedLabel(GPUImageHashIdentifier)a
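// For illustration, SHADER_STRING(void main() { gl_FragColor = vec4(1.0); })
// stringizes at compile time into the literal @"void main() { gl_FragColor = vec4(1.0); }".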
extern NSString *const kGPUImageVertexShaderString;
extern NSString *const kGPUImagePassthroughFragmentShaderString;
struct GPUVector4 {
GLfloat one;
GLfloat two;
GLfloat three;
GLfloat four;
};
typedef struct GPUVector4 GPUVector4;
struct GPUVector3 {
GLfloat one;
GLfloat two;
GLfloat three;
};
typedef struct GPUVector3 GPUVector3;
struct GPUMatrix4x4 {
GPUVector4 one;
GPUVector4 two;
GPUVector4 three;
GPUVector4 four;
};
typedef struct GPUMatrix4x4 GPUMatrix4x4;
struct GPUMatrix3x3 {
GPUVector3 one;
GPUVector3 two;
GPUVector3 three;
};
typedef struct GPUMatrix3x3 GPUMatrix3x3;
/** GPUImage's base filter class
Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
*/
@interface GPUImageFilter : GPUImageOutput <GPUImageInput>
{
GPUImageFramebuffer *firstInputFramebuffer;
GLProgram *filterProgram;
GLint filterPositionAttribute, filterTextureCoordinateAttribute;
GLint filterInputTextureUniform;
GLfloat backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha;
BOOL isEndProcessing;
CGSize currentFilterSize;
GPUImageRotationMode inputRotation;
BOOL currentlyReceivingMonochromeInput;
NSMutableDictionary *uniformStateRestorationBlocks;
dispatch_semaphore_t imageCaptureSemaphore;
}
@property(readonly) CVPixelBufferRef renderTarget;
@property(readwrite, nonatomic) BOOL preventRendering;
@property(readwrite, nonatomic) BOOL currentlyReceivingMonochromeInput;
/// @name Initialization and teardown
/**
Initialize with vertex and fragment shaders
You may take advantage of the SHADER_STRING macro to write your shaders in-line.
@param vertexShaderString Source code of the vertex shader to use
@param fragmentShaderString Source code of the fragment shader to use
*/
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
/**
Initialize with a fragment shader
You may take advantage of the SHADER_STRING macro to write your shader in-line.
@param fragmentShaderString Source code of fragment shader to use
*/
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
/**
Initialize with a fragment shader
@param fragmentShaderFilename Filename of fragment shader to load
*/
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
- (void)initializeAttributes;
- (void)setupFilterForSize:(CGSize)filterFrameSize;
- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
- (CGPoint)rotatedPoint:(CGPoint)pointToRotate forRotation:(GPUImageRotationMode)rotation;
/// @name Managing the display FBOs
/** Size of the frame buffer object
*/
- (CGSize)sizeOfFBO;
/// @name Rendering
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
- (CGSize)outputFrameSize;
/// @name Input parameters
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
- (void)setInteger:(GLint)newInteger forUniformName:(NSString *)uniformName;
- (void)setFloat:(GLfloat)newFloat forUniformName:(NSString *)uniformName;
- (void)setSize:(CGSize)newSize forUniformName:(NSString *)uniformName;
- (void)setPoint:(CGPoint)newPoint forUniformName:(NSString *)uniformName;
- (void)setFloatVec3:(GPUVector3)newVec3 forUniformName:(NSString *)uniformName;
- (void)setFloatVec4:(GPUVector4)newVec4 forUniform:(NSString *)uniformName;
- (void)setFloatArray:(GLfloat *)array length:(GLsizei)count forUniform:(NSString*)uniformName;
- (void)setMatrix3f:(GPUMatrix3x3)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setMatrix4f:(GPUMatrix4x4)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloat:(GLfloat)floatValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setPoint:(CGPoint)pointValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setSize:(CGSize)sizeValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec3:(GPUVector3)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setVec4:(GPUVector4)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setFloatArray:(GLfloat *)arrayValue length:(GLsizei)arrayLength forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setInteger:(GLint)intValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
- (void)setAndExecuteUniformStateCallbackAtIndex:(GLint)uniform forProgram:(GLProgram *)shaderProgram toBlock:(dispatch_block_t)uniformStateBlock;
- (void)setUniformsForProgramAtIndex:(NSUInteger)programIndex;
@end
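A minimal chain built against this interface, as the class comment describes (the source image name is an assumption):

GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"input.png"]];
GPUImageFilter *passthrough = [[GPUImageFilter alloc] init]; // -init uses the passthrough fragment shader
GPUImageExposureFilter *exposure = [[GPUImageExposureFilter alloc] init];
[source addTarget:passthrough]; // passthrough is now a target of source
[passthrough addTarget:exposure]; // processing can branch by adding further targets here
[exposure useNextFrameForImageCapture];
[source processImage];
UIImage *processed = [exposure imageFromCurrentFramebuffer];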


@ -0,0 +1,753 @@
#import "GPUImageFilter.h"
#import "GPUImagePicture.h"
#import <AVFoundation/AVFoundation.h>
// Hardcode the vertex shader for standard filters, but this can be overridden
NSString *const kGPUImageVertexShaderString = SHADER_STRING
(
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
);
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#else
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
);
#endif
@implementation GPUImageFilter
@synthesize preventRendering = _preventRendering;
@synthesize currentlyReceivingMonochromeInput;
#pragma mark -
#pragma mark Initialization and teardown
- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [super init]))
{
return nil;
}
uniformStateRestorationBlocks = [NSMutableDictionary dictionaryWithCapacity:10];
_preventRendering = NO;
currentlyReceivingMonochromeInput = NO;
inputRotation = kGPUImageNoRotation;
backgroundColorRed = 0.0;
backgroundColorGreen = 0.0;
backgroundColorBlue = 0.0;
backgroundColorAlpha = 0.0;
imageCaptureSemaphore = dispatch_semaphore_create(0);
dispatch_semaphore_signal(imageCaptureSemaphore);
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
filterProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString];
if (!filterProgram.initialized)
{
[self initializeAttributes];
if (![filterProgram link])
{
NSString *progLog = [filterProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [filterProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [filterProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
filterProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
filterPositionAttribute = [filterProgram attributeIndex:@"position"];
filterTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate"];
filterInputTextureUniform = [filterProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
[GPUImageContext setActiveShaderProgram:filterProgram];
glEnableVertexAttribArray(filterPositionAttribute);
glEnableVertexAttribArray(filterTextureCoordinateAttribute);
});
return self;
}
- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
if (!(self = [self initWithVertexShaderFromString:kGPUImageVertexShaderString fragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
{
NSString *fragmentShaderPathname = [[NSBundle mainBundle] pathForResource:fragmentShaderFilename ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragmentShaderPathname encoding:NSUTF8StringEncoding error:nil];
if (!(self = [self initWithFragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
return self;
}
- (id)init;
{
if (!(self = [self initWithFragmentShaderFromString:kGPUImagePassthroughFragmentShaderString]))
{
return nil;
}
return self;
}
- (void)initializeAttributes;
{
[filterProgram addAttribute:@"position"];
[filterProgram addAttribute:@"inputTextureCoordinate"];
// Override this, calling back to this super method, in order to add new attributes to your vertex shader
}
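// A subclass override sketch for the comment above (the extra attribute name is hypothetical):
// - (void)initializeAttributes;
// {
//     [super initializeAttributes];
//     [filterProgram addAttribute:@"inputTextureCoordinate2"];
// }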
- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
// This is where you can override to provide some custom setup, if your filter has a size-dependent element
}
- (void)dealloc
{
#if !OS_OBJECT_USE_OBJC
if (imageCaptureSemaphore != NULL)
{
dispatch_release(imageCaptureSemaphore);
}
#endif
}
#pragma mark -
#pragma mark Still image processing
- (void)useNextFrameForImageCapture;
{
usingNextFrameForImageCapture = YES;
// Set the semaphore high, if it isn't already
if (dispatch_semaphore_wait(imageCaptureSemaphore, DISPATCH_TIME_NOW) != 0)
{
return;
}
}
- (CGImageRef)newCGImageFromCurrentlyProcessedOutput
{
// Give it three seconds to process, then abort if they forgot to set up the image capture properly
double timeoutForImageCapture = 3.0;
dispatch_time_t convertedTimeout = dispatch_time(DISPATCH_TIME_NOW, timeoutForImageCapture * NSEC_PER_SEC);
if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0)
{
return NULL;
}
GPUImageFramebuffer* framebuffer = [self framebufferForOutput];
usingNextFrameForImageCapture = NO;
dispatch_semaphore_signal(imageCaptureSemaphore);
CGImageRef image = [framebuffer newCGImageFromFramebufferContents];
return image;
}
#pragma mark -
#pragma mark Managing the display FBOs
- (CGSize)sizeOfFBO;
{
CGSize outputSize = [self maximumOutputSize];
if ( (CGSizeEqualToSize(outputSize, CGSizeZero)) || (inputTextureSize.width < outputSize.width) )
{
return inputTextureSize;
}
else
{
return outputSize;
}
}
#pragma mark -
#pragma mark Rendering
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
static const GLfloat noRotationTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat rotateLeftTextureCoordinates[] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 0.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightTextureCoordinates[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat verticalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f,
};
static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
static const GLfloat rotate180TextureCoordinates[] = {
1.0f, 1.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
};
switch(rotationMode)
{
case kGPUImageNoRotation: return noRotationTextureCoordinates;
case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
case kGPUImageRotateRight: return rotateRightTextureCoordinates;
case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
case kGPUImageRotate180: return rotate180TextureCoordinates;
}
}
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
if (self.preventRendering)
{
[firstInputFramebuffer unlock];
return;
}
[GPUImageContext setActiveShaderProgram:filterProgram];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
[outputFramebuffer activateFramebuffer];
if (usingNextFrameForImageCapture)
{
[outputFramebuffer lock];
}
[self setUniformsForProgramAtIndex:0];
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
glUniform1i(filterInputTextureUniform, 2);
glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[firstInputFramebuffer unlock];
if (usingNextFrameForImageCapture)
{
dispatch_semaphore_signal(imageCaptureSemaphore);
}
}
- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
{
if (self.frameProcessingCompletionBlock != NULL)
{
self.frameProcessingCompletionBlock(self, frameTime);
}
// Hand the framebuffer to all targets so they can grab a lock on it
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
[currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
}
}
// Release our hold so it can return to the cache immediately upon processing
[[self framebufferForOutput] unlock];
if (usingNextFrameForImageCapture)
{
// usingNextFrameForImageCapture = NO;
}
else
{
[self removeOutputFramebuffer];
}
// Trigger processing last, so that our unlock comes first in serial execution, avoiding the need for a callback
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
}
}
}
- (CGSize)outputFrameSize;
{
return inputTextureSize;
}
#pragma mark -
#pragma mark Input parameters
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
{
backgroundColorRed = redComponent;
backgroundColorGreen = greenComponent;
backgroundColorBlue = blueComponent;
backgroundColorAlpha = alphaComponent;
}
- (void)setInteger:(GLint)newInteger forUniformName:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setInteger:newInteger forUniform:uniformIndex program:filterProgram];
}
- (void)setFloat:(GLfloat)newFloat forUniformName:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setFloat:newFloat forUniform:uniformIndex program:filterProgram];
}
- (void)setSize:(CGSize)newSize forUniformName:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setSize:newSize forUniform:uniformIndex program:filterProgram];
}
- (void)setPoint:(CGPoint)newPoint forUniformName:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setPoint:newPoint forUniform:uniformIndex program:filterProgram];
}
- (void)setFloatVec3:(GPUVector3)newVec3 forUniformName:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setVec3:newVec3 forUniform:uniformIndex program:filterProgram];
}
- (void)setFloatVec4:(GPUVector4)newVec4 forUniform:(NSString *)uniformName;
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setVec4:newVec4 forUniform:uniformIndex program:filterProgram];
}
- (void)setFloatArray:(GLfloat *)array length:(GLsizei)count forUniform:(NSString*)uniformName
{
GLint uniformIndex = [filterProgram uniformIndex:uniformName];
[self setFloatArray:array length:count forUniform:uniformIndex program:filterProgram];
}
- (void)setMatrix3f:(GPUMatrix3x3)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniformMatrix3fv(uniform, 1, GL_FALSE, (GLfloat *)&matrix);
}];
});
}
- (void)setMatrix4f:(GPUMatrix4x4)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniformMatrix4fv(uniform, 1, GL_FALSE, (GLfloat *)&matrix);
}];
});
}
- (void)setFloat:(GLfloat)floatValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniform1f(uniform, floatValue);
}];
});
}
- (void)setPoint:(CGPoint)pointValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
GLfloat positionArray[2];
positionArray[0] = pointValue.x;
positionArray[1] = pointValue.y;
glUniform2fv(uniform, 1, positionArray);
}];
});
}
- (void)setSize:(CGSize)sizeValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
GLfloat sizeArray[2];
sizeArray[0] = sizeValue.width;
sizeArray[1] = sizeValue.height;
glUniform2fv(uniform, 1, sizeArray);
}];
});
}
- (void)setVec3:(GPUVector3)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniform3fv(uniform, 1, (GLfloat *)&vectorValue);
}];
});
}
- (void)setVec4:(GPUVector4)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniform4fv(uniform, 1, (GLfloat *)&vectorValue);
}];
});
}
- (void)setFloatArray:(GLfloat *)arrayValue length:(GLsizei)arrayLength forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
// Make a copy of the data, so it doesn't get overwritten before async call executes
NSData* arrayData = [NSData dataWithBytes:arrayValue length:arrayLength * sizeof(arrayValue[0])];
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniform1fv(uniform, arrayLength, [arrayData bytes]);
}];
});
}
- (void)setInteger:(GLint)intValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
runAsynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:shaderProgram];
[self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
glUniform1i(uniform, intValue);
}];
});
}
- (void)setAndExecuteUniformStateCallbackAtIndex:(GLint)uniform forProgram:(GLProgram *)shaderProgram toBlock:(dispatch_block_t)uniformStateBlock;
{
[uniformStateRestorationBlocks setObject:[uniformStateBlock copy] forKey:[NSNumber numberWithInt:uniform]];
uniformStateBlock();
}
- (void)setUniformsForProgramAtIndex:(NSUInteger)programIndex;
{
[uniformStateRestorationBlocks enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop){
dispatch_block_t currentBlock = obj;
currentBlock();
}];
}
#pragma mark -
#pragma mark GPUImageInput
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
static const GLfloat imageVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
[self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];
[self informTargetsAboutNewFrameAtTime:frameTime];
}
- (NSInteger)nextAvailableTextureIndex;
{
return 0;
}
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
firstInputFramebuffer = newInputFramebuffer;
[firstInputFramebuffer lock];
}
- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
{
CGSize rotatedSize = sizeToRotate;
if (GPUImageRotationSwapsWidthAndHeight(inputRotation))
{
rotatedSize.width = sizeToRotate.height;
rotatedSize.height = sizeToRotate.width;
}
return rotatedSize;
}
- (CGPoint)rotatedPoint:(CGPoint)pointToRotate forRotation:(GPUImageRotationMode)rotation;
{
CGPoint rotatedPoint;
switch(rotation)
{
case kGPUImageNoRotation: return pointToRotate; break;
case kGPUImageFlipHorizonal:
{
rotatedPoint.x = 1.0 - pointToRotate.x;
rotatedPoint.y = pointToRotate.y;
}; break;
case kGPUImageFlipVertical:
{
rotatedPoint.x = pointToRotate.x;
rotatedPoint.y = 1.0 - pointToRotate.y;
}; break;
case kGPUImageRotateLeft:
{
rotatedPoint.x = 1.0 - pointToRotate.y;
rotatedPoint.y = pointToRotate.x;
}; break;
case kGPUImageRotateRight:
{
rotatedPoint.x = pointToRotate.y;
rotatedPoint.y = 1.0 - pointToRotate.x;
}; break;
case kGPUImageRotateRightFlipVertical:
{
rotatedPoint.x = pointToRotate.y;
rotatedPoint.y = pointToRotate.x;
}; break;
case kGPUImageRotateRightFlipHorizontal:
{
rotatedPoint.x = 1.0 - pointToRotate.y;
rotatedPoint.y = 1.0 - pointToRotate.x;
}; break;
case kGPUImageRotate180:
{
rotatedPoint.x = 1.0 - pointToRotate.x;
rotatedPoint.y = 1.0 - pointToRotate.y;
}; break;
}
return rotatedPoint;
}
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
if (self.preventRendering)
{
return;
}
if (overrideInputSize)
{
if (CGSizeEqualToSize(forcedMaximumSize, CGSizeZero))
{
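// A fixed size was forced via forceProcessingAtSize:, so the incoming size is ignored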
}
else
{
CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(newSize, CGRectMake(0.0, 0.0, forcedMaximumSize.width, forcedMaximumSize.height));
inputTextureSize = insetRect.size;
}
}
else
{
CGSize rotatedSize = [self rotatedSize:newSize forIndex:textureIndex];
if (CGSizeEqualToSize(rotatedSize, CGSizeZero))
{
inputTextureSize = rotatedSize;
}
else if (!CGSizeEqualToSize(inputTextureSize, rotatedSize))
{
inputTextureSize = rotatedSize;
}
}
[self setupFilterForSize:[self sizeOfFBO]];
}
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
inputRotation = newInputRotation;
}
- (void)forceProcessingAtSize:(CGSize)frameSize;
{
if (CGSizeEqualToSize(frameSize, CGSizeZero))
{
overrideInputSize = NO;
}
else
{
overrideInputSize = YES;
inputTextureSize = frameSize;
forcedMaximumSize = CGSizeZero;
}
}
- (void)forceProcessingAtSizeRespectingAspectRatio:(CGSize)frameSize;
{
if (CGSizeEqualToSize(frameSize, CGSizeZero))
{
overrideInputSize = NO;
inputTextureSize = CGSizeZero;
forcedMaximumSize = CGSizeZero;
}
else
{
overrideInputSize = YES;
forcedMaximumSize = frameSize;
}
}
- (CGSize)maximumOutputSize;
{
// I'm temporarily disabling adjustments for smaller output sizes until I figure out how to make this work better
return CGSizeZero;
/*
if (CGSizeEqualToSize(cachedMaximumOutputSize, CGSizeZero))
{
for (id<GPUImageInput> currentTarget in targets)
{
if ([currentTarget maximumOutputSize].width > cachedMaximumOutputSize.width)
{
cachedMaximumOutputSize = [currentTarget maximumOutputSize];
}
}
}
return cachedMaximumOutputSize;
*/
}
- (void)endProcessing
{
if (!isEndProcessing)
{
isEndProcessing = YES;
for (id<GPUImageInput> currentTarget in targets)
{
[currentTarget endProcessing];
}
}
}
- (BOOL)wantsMonochromeInput;
{
return NO;
}
#pragma mark -
#pragma mark Accessors
@end


@ -0,0 +1,19 @@
#import "GPUImageOutput.h"
#import "GPUImageFilter.h"
@interface GPUImageFilterGroup : GPUImageOutput <GPUImageInput>
{
NSMutableArray *filters;
BOOL isEndProcessing;
}
@property(readwrite, nonatomic, strong) GPUImageOutput<GPUImageInput> *terminalFilter;
@property(readwrite, nonatomic, strong) NSArray *initialFilters;
@property(readwrite, nonatomic, strong) GPUImageOutput<GPUImageInput> *inputFilterToIgnoreForUpdates;
// Filter management
- (void)addFilter:(GPUImageOutput<GPUImageInput> *)newFilter;
- (GPUImageOutput<GPUImageInput> *)filterAtIndex:(NSUInteger)filterIndex;
- (NSUInteger)filterCount;
@end
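A sketch of wiring a two-filter group through the properties above (the filter choices are illustrative):

GPUImageFilterGroup *group = [[GPUImageFilterGroup alloc] init];
GPUImageExposureFilter *exposure = [[GPUImageExposureFilter alloc] init];
GPUImageFalseColorFilter *falseColor = [[GPUImageFalseColorFilter alloc] init];
[group addFilter:exposure];
[group addFilter:falseColor];
[exposure addTarget:falseColor];
group.initialFilters = [NSArray arrayWithObject:exposure]; // receives every incoming frame
group.terminalFilter = falseColor; // forwards processed frames to the group's targets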


@ -0,0 +1,208 @@
#import "GPUImageFilterGroup.h"
#import "GPUImagePicture.h"
@implementation GPUImageFilterGroup
@synthesize terminalFilter = _terminalFilter;
@synthesize initialFilters = _initialFilters;
@synthesize inputFilterToIgnoreForUpdates = _inputFilterToIgnoreForUpdates;
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
filters = [[NSMutableArray alloc] init];
return self;
}
#pragma mark -
#pragma mark Filter management
- (void)addFilter:(GPUImageOutput<GPUImageInput> *)newFilter;
{
[filters addObject:newFilter];
}
- (GPUImageOutput<GPUImageInput> *)filterAtIndex:(NSUInteger)filterIndex;
{
return [filters objectAtIndex:filterIndex];
}
- (NSUInteger)filterCount;
{
return [filters count];
}
#pragma mark -
#pragma mark Still image processing
- (void)useNextFrameForImageCapture;
{
[self.terminalFilter useNextFrameForImageCapture];
}
- (CGImageRef)newCGImageFromCurrentlyProcessedOutput;
{
return [self.terminalFilter newCGImageFromCurrentlyProcessedOutput];
}
#pragma mark -
#pragma mark GPUImageOutput overrides
- (void)setTargetToIgnoreForUpdates:(id<GPUImageInput>)targetToIgnoreForUpdates;
{
[_terminalFilter setTargetToIgnoreForUpdates:targetToIgnoreForUpdates];
}
- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
[_terminalFilter addTarget:newTarget atTextureLocation:textureLocation];
}
- (void)removeTarget:(id<GPUImageInput>)targetToRemove;
{
[_terminalFilter removeTarget:targetToRemove];
}
- (void)removeAllTargets;
{
[_terminalFilter removeAllTargets];
}
- (NSArray *)targets;
{
return [_terminalFilter targets];
}
- (void)setFrameProcessingCompletionBlock:(void (^)(GPUImageOutput *, CMTime))frameProcessingCompletionBlock;
{
[_terminalFilter setFrameProcessingCompletionBlock:frameProcessingCompletionBlock];
}
- (void (^)(GPUImageOutput *, CMTime))frameProcessingCompletionBlock;
{
return [_terminalFilter frameProcessingCompletionBlock];
}
#pragma mark -
#pragma mark GPUImageInput protocol
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
if (currentFilter != self.inputFilterToIgnoreForUpdates)
{
[currentFilter newFrameReadyAtTime:frameTime atIndex:textureIndex];
}
}
}
- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
[currentFilter setInputFramebuffer:newInputFramebuffer atIndex:textureIndex];
}
}
- (NSInteger)nextAvailableTextureIndex;
{
// if ([_initialFilters count] > 0)
// {
// return [[_initialFilters objectAtIndex:0] nextAvailableTextureIndex];
// }
return 0;
}
- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
[currentFilter setInputSize:newSize atIndex:textureIndex];
}
}
- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
[currentFilter setInputRotation:newInputRotation atIndex:(NSInteger)textureIndex];
}
}
- (void)forceProcessingAtSize:(CGSize)frameSize;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in filters)
{
[currentFilter forceProcessingAtSize:frameSize];
}
}
- (void)forceProcessingAtSizeRespectingAspectRatio:(CGSize)frameSize;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in filters)
{
[currentFilter forceProcessingAtSizeRespectingAspectRatio:frameSize];
}
}
- (CGSize)maximumOutputSize;
{
// I'm temporarily disabling adjustments for smaller output sizes until I figure out how to make this work better
return CGSizeZero;
/*
if (CGSizeEqualToSize(cachedMaximumOutputSize, CGSizeZero))
{
for (id<GPUImageInput> currentTarget in _initialFilters)
{
if ([currentTarget maximumOutputSize].width > cachedMaximumOutputSize.width)
{
cachedMaximumOutputSize = [currentTarget maximumOutputSize];
}
}
}
return cachedMaximumOutputSize;
*/
}
- (void)endProcessing;
{
if (!isEndProcessing)
{
isEndProcessing = YES;
for (id<GPUImageInput> currentTarget in _initialFilters)
{
[currentTarget endProcessing];
}
}
}
- (BOOL)wantsMonochromeInput;
{
BOOL allInputsWantMonochromeInput = YES;
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
allInputsWantMonochromeInput = allInputsWantMonochromeInput && [currentFilter wantsMonochromeInput];
}
return allInputsWantMonochromeInput;
}
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
{
for (GPUImageOutput<GPUImageInput> *currentFilter in _initialFilters)
{
[currentFilter setCurrentlyReceivingMonochromeInput:newValue];
}
}
@end


@ -0,0 +1,30 @@
#import <Foundation/Foundation.h>
#import "GPUImageOutput.h"
@interface GPUImageFilterPipeline : NSObject
{
NSString *stringValue;
}
@property (strong) NSMutableArray *filters;
@property (strong) GPUImageOutput *input;
@property (strong) id <GPUImageInput> output;
- (id) initWithOrderedFilters:(NSArray*) filters input:(GPUImageOutput*)input output:(id <GPUImageInput>)output;
- (id) initWithConfiguration:(NSDictionary*) configuration input:(GPUImageOutput*)input output:(id <GPUImageInput>)output;
- (id) initWithConfigurationFile:(NSURL*) configuration input:(GPUImageOutput*)input output:(id <GPUImageInput>)output;
- (void) addFilter:(GPUImageOutput<GPUImageInput> *)filter;
- (void) addFilter:(GPUImageOutput<GPUImageInput> *)filter atIndex:(NSUInteger)insertIndex;
- (void) replaceFilterAtIndex:(NSUInteger)index withFilter:(GPUImageOutput<GPUImageInput> *)filter;
- (void) replaceAllFilters:(NSArray *) newFilters;
- (void) removeFilter:(GPUImageOutput<GPUImageInput> *)filter;
- (void) removeFilterAtIndex:(NSUInteger)index;
- (void) removeAllFilters;
- (UIImage *) currentFilteredFrame;
- (UIImage *) currentFilteredFrameWithOrientation:(UIImageOrientation)imageOrientation;
- (CGImageRef) newCGImageFromCurrentFilteredFrame;
@end
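A pipeline sketch tying these declarations together; the input image, the on-screen GPUImageView, and the filter choices are assumptions:

GPUImagePicture *input = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"input.png"]];
GPUImageView *output = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
NSArray *ordered = [NSArray arrayWithObject:[[GPUImageExposureFilter alloc] init]];
GPUImageFilterPipeline *pipeline = [[GPUImageFilterPipeline alloc] initWithOrderedFilters:ordered input:input output:output];
[input processImage];
[pipeline replaceAllFilters:[NSArray arrayWithObject:[[GPUImageFalseColorFilter alloc] init]]]; // rewires input -> filters -> output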


@ -0,0 +1,218 @@
#import "GPUImageFilterPipeline.h"
@interface GPUImageFilterPipeline ()
- (BOOL)_parseConfiguration:(NSDictionary *)configuration;
- (void)_refreshFilters;
@end
@implementation GPUImageFilterPipeline
@synthesize filters = _filters, input = _input, output = _output;
#pragma mark Config file init
- (id)initWithConfiguration:(NSDictionary *)configuration input:(GPUImageOutput *)input output:(id <GPUImageInput>)output {
self = [super init];
if (self) {
self.input = input;
self.output = output;
if (![self _parseConfiguration:configuration]) {
NSLog(@"Sorry, a parsing error occurred.");
abort();
}
[self _refreshFilters];
}
return self;
}
- (id)initWithConfigurationFile:(NSURL *)configuration input:(GPUImageOutput *)input output:(id <GPUImageInput>)output {
return [self initWithConfiguration:[NSDictionary dictionaryWithContentsOfURL:configuration] input:input output:output];
}
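Based on the parser below, a configuration dictionary (for example, loaded from a plist) looks roughly like this; the filter name and attribute values are illustrative:

Filters = (
    {
        FilterName = GPUImageExposureFilter;
        Attributes = { "setExposure:" = "float(1.0)"; };
    }
);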
- (BOOL)_parseConfiguration:(NSDictionary *)configuration {
NSArray *filters = [configuration objectForKey:@"Filters"];
if (!filters) {
return NO;
}
NSError *regexError = nil;
NSRegularExpression *parsingRegex = [NSRegularExpression regularExpressionWithPattern:@"(float|CGPoint|NSString)\\((.*?)(?:,\\s*(.*?))*\\)"
options:0
error:&regexError];
// It's faster to put them into an array and then pass it to the filters property than it is to call [self addFilter:] every time
NSMutableArray *orderedFilters = [NSMutableArray arrayWithCapacity:[filters count]];
for (NSDictionary *filter in filters) {
NSString *filterName = [filter objectForKey:@"FilterName"];
Class theClass = NSClassFromString(filterName);
GPUImageOutput<GPUImageInput> *genericFilter = [[theClass alloc] init];
// Set up the properties
NSDictionary *filterAttributes;
if ((filterAttributes = [filter objectForKey:@"Attributes"])) {
for (NSString *propertyKey in filterAttributes) {
// Set up the selector
SEL theSelector = NSSelectorFromString(propertyKey);
NSInvocation *inv = [NSInvocation invocationWithMethodSignature:[theClass instanceMethodSignatureForSelector:theSelector]];
[inv setSelector:theSelector];
[inv setTarget:genericFilter];
// check selector given with parameter
if ([propertyKey hasSuffix:@":"]) {
stringValue = nil;
// Then parse the arguments
NSMutableArray *parsedArray;
if ([[filterAttributes objectForKey:propertyKey] isKindOfClass:[NSArray class]]) {
NSArray *array = [filterAttributes objectForKey:propertyKey];
parsedArray = [NSMutableArray arrayWithCapacity:[array count]];
for (NSString *string in array) {
NSTextCheckingResult *parse = [parsingRegex firstMatchInString:string
options:0
range:NSMakeRange(0, [string length])];
NSString *modifier = [string substringWithRange:[parse rangeAtIndex:1]];
if ([modifier isEqualToString:@"float"]) {
// Float modifier, one argument
CGFloat value = [[string substringWithRange:[parse rangeAtIndex:2]] floatValue];
[parsedArray addObject:[NSNumber numberWithFloat:value]];
[inv setArgument:&value atIndex:2];
} else if ([modifier isEqualToString:@"CGPoint"]) {
// CGPoint modifier, two float arguments
CGFloat x = [[string substringWithRange:[parse rangeAtIndex:2]] floatValue];
CGFloat y = [[string substringWithRange:[parse rangeAtIndex:3]] floatValue];
CGPoint value = CGPointMake(x, y);
[parsedArray addObject:[NSValue valueWithCGPoint:value]];
} else if ([modifier isEqualToString:@"NSString"]) {
// NSString modifier, one string argument
stringValue = [[string substringWithRange:[parse rangeAtIndex:2]] copy];
[inv setArgument:&stringValue atIndex:2];
} else {
return NO;
}
}
[inv setArgument:&parsedArray atIndex:2];
} else {
NSString *string = [filterAttributes objectForKey:propertyKey];
NSTextCheckingResult *parse = [parsingRegex firstMatchInString:string
options:0
range:NSMakeRange(0, [string length])];
NSString *modifier = [string substringWithRange:[parse rangeAtIndex:1]];
if ([modifier isEqualToString:@"float"]) {
// Float modifier, one argument
CGFloat value = [[string substringWithRange:[parse rangeAtIndex:2]] floatValue];
[inv setArgument:&value atIndex:2];
} else if ([modifier isEqualToString:@"CGPoint"]) {
// CGPoint modifier, two float arguments
CGFloat x = [[string substringWithRange:[parse rangeAtIndex:2]] floatValue];
CGFloat y = [[string substringWithRange:[parse rangeAtIndex:3]] floatValue];
CGPoint value = CGPointMake(x, y);
[inv setArgument:&value atIndex:2];
} else if ([modifier isEqualToString:@"NSString"]) {
// NSString modifier, one string argument
stringValue = [[string substringWithRange:[parse rangeAtIndex:2]] copy];
[inv setArgument:&stringValue atIndex:2];
} else {
return NO;
}
}
}
[inv invoke];
}
}
[orderedFilters addObject:genericFilter];
}
self.filters = orderedFilters;
return YES;
}
#pragma mark Regular init
- (id)initWithOrderedFilters:(NSArray *)filters input:(GPUImageOutput *)input output:(id <GPUImageInput>)output {
self = [super init];
if (self) {
self.input = input;
self.output = output;
self.filters = [NSMutableArray arrayWithArray:filters];
[self _refreshFilters];
}
return self;
}
- (void)addFilter:(GPUImageOutput<GPUImageInput> *)filter atIndex:(NSUInteger)insertIndex {
[self.filters insertObject:filter atIndex:insertIndex];
[self _refreshFilters];
}
- (void)addFilter:(GPUImageOutput<GPUImageInput> *)filter {
[self.filters addObject:filter];
[self _refreshFilters];
}
- (void)replaceFilterAtIndex:(NSUInteger)index withFilter:(GPUImageOutput<GPUImageInput> *)filter {
[self.filters replaceObjectAtIndex:index withObject:filter];
[self _refreshFilters];
}
- (void) removeFilter:(GPUImageOutput<GPUImageInput> *)filter;
{
[self.filters removeObject:filter];
[self _refreshFilters];
}
- (void)removeFilterAtIndex:(NSUInteger)index {
[self.filters removeObjectAtIndex:index];
[self _refreshFilters];
}
- (void)removeAllFilters {
[self.filters removeAllObjects];
[self _refreshFilters];
}
- (void)replaceAllFilters:(NSArray *)newFilters {
self.filters = [NSMutableArray arrayWithArray:newFilters];
[self _refreshFilters];
}
- (void)_refreshFilters {
id prevFilter = self.input;
GPUImageOutput<GPUImageInput> *theFilter = nil;
for (int i = 0; i < [self.filters count]; i++) {
theFilter = [self.filters objectAtIndex:i];
[prevFilter removeAllTargets];
[prevFilter addTarget:theFilter];
prevFilter = theFilter;
}
[prevFilter removeAllTargets];
if (self.output != nil) {
[prevFilter addTarget:self.output];
}
}
- (UIImage *)currentFilteredFrame {
return [(GPUImageOutput<GPUImageInput> *)[_filters lastObject] imageFromCurrentFramebuffer];
}
- (UIImage *)currentFilteredFrameWithOrientation:(UIImageOrientation)imageOrientation {
return [(GPUImageOutput<GPUImageInput> *)[_filters lastObject] imageFromCurrentFramebufferWithOrientation:imageOrientation];
}
- (CGImageRef)newCGImageFromCurrentFilteredFrame {
return [(GPUImageOutput<GPUImageInput> *)[_filters lastObject] newCGImageFromCurrentlyProcessedOutput];
}
@end

Some files were not shown because too many files have changed in this diff.