Java Convolution algorithm not working.

Hi, I’m new to GitHub.

I’m writing a little image renderer in Java, and as part of it I implemented a convolution algorithm, but I get a weird effect.

For example, here is a simple image before and after the convolution is computed:

Original image:

Computed image:


And I used this matrix:

-1, -1, -1

-1, 8,  -1

-1, -1, -1

Which is an edge detection matrix.
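Written out as a plain Java array (the class and the sum check below are just an illustration, not part of my renderer), the kernel looks like this. Note its coefficients sum to 0, which is why flat regions of the image should come out black:

```java
public class EdgeKernelDemo {
    // 3x3 edge detection kernel: the centre weight (8) balances the
    // eight neighbours (-1 each), so the coefficients sum to 0.
    static final double[][] EDGE = {
        {-1, -1, -1},
        {-1,  8, -1},
        {-1, -1, -1}
    };

    // Sum of all coefficients of a kernel.
    static double sum(double[][] k) {
        double s = 0;
        for (double[] row : k)
            for (double v : row) s += v;
        return s;
    }

    public static void main(String[] args) {
        // Over a flat (constant) area, the convolution output is
        // pixelValue * sum(kernel), i.e. 0 for this kernel.
        System.out.println(sum(EDGE));
    }
}
```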

Now, why do I get a wrong result?

Here is the algorithm code:

public void convolve(Kernel kernel, int mult){
        int kwidth = kernel.width;
        int kheight = kernel.height;

        int widthrad = kwidth >>> 1;   // kernel half-width (radius)
        int heightrad = kheight >>> 1;

        double value;

        double[][] kdata = kernel.kernel; // extracts the coefficients from the kernel

        for(int i = 0; i < width; i++){
            for(int j = 0; j < height; j++){
                value = 0;

                for(int k = 0; k < kwidth; k++){
                    for(int t = 0; t < kheight; t++){
                        value += kdata[k][t]*get2DCoordPixel(
                                getBound(i+k-widthrad, width),
                                getBound(j+t-heightrad, height));
                    }
                }

                drawPixel(i, j, (int) round(value*mult));
            }
        }
    }

    private int getBound(int val, int end){
        // clamps a coordinate to [0, end), replicating the edge pixels
        if(val < 0) return 0;
        if(val < end) return val;
        return end - 1;
    }

What am I doing wrong?


EDIT: Also, this algorithm is very slow. In my tests it takes about 100 ms to compute a 1920x1080 image (on an i3-8100 CPU). Is there a better alternative?

Hi @frankprog03,

This post of yours was moved to a different board that fits your topic of discussion a bit better. This means you’ll get better engagement on your post, and it keeps our Community organized so users can more easily find information.

As you’ll notice, your Topic is now in the ‘Project Development Help and Advice’ board.

Let me know if you have any other questions or if I can help with anything else.


Thank you very much! I’ll keep that in mind next time I ask a question.


It is difficult to be certain, as you don’t provide the full source or a link to the repository.

My first question would be whether you are taking the three bytes of RGB into account? I.e. unless you convert your input image to monochrome first, you actually have three colour bytes, _each_ of which has to have the matrix applied. Then you re-combine the three convolved bytes to produce the output.
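A minimal sketch of that per-channel split, assuming your pixels are packed as 0xRRGGBB ints (alpha ignored; the class and method names here are made up for illustration):

```java
public class ChannelSplit {
    // Unpack one channel from a packed 0xRRGGBB pixel.
    static int red(int rgb)   { return (rgb >> 16) & 0xFF; }
    static int green(int rgb) { return (rgb >> 8)  & 0xFF; }
    static int blue(int rgb)  { return rgb & 0xFF; }

    // Re-pack three 0..255 channel values into a single pixel.
    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int pixel = 0x336699;
        // In the real algorithm you would convolve each channel
        // separately between unpacking and re-packing.
        int out = pack(red(pixel), green(pixel), blue(pixel));
        System.out.println(out == pixel); // round-trips unchanged
    }
}
```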

This link might provide some useful background / pointers / code.

Get the algorithm working first, then we could discuss optimization. [E.g. an obvious optimization would be to not perform getBound() unless you “know” you are close to the edge.]
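To sketch what I mean (standalone toy code, not your renderer: the image is a plain `int[][]` and I only sum a window, but the structure carries over): the kernel window can only fall off the image within one kernel radius of an edge, so only those border pixels need the clamping.

```java
public class BorderSplit {
    // True when the kernel window around (x, y) can fall off the image,
    // i.e. only within one kernel radius of any edge.
    static boolean nearEdge(int x, int y, int wrad, int hrad, int w, int h) {
        return x < wrad || y < hrad || x >= w - wrad || y >= h - hrad;
    }

    // Clamp-to-edge, same behaviour as the poster's getBound().
    static int getBound(int val, int end) {
        if (val < 0) return 0;
        if (val < end) return val;
        return end - 1;
    }

    // Sum a kw x kh window around (x, y); clamp only when needed.
    static double window(int[][] img, int x, int y, int kw, int kh) {
        int w = img.length, h = img[0].length;
        int wrad = kw >>> 1, hrad = kh >>> 1;
        boolean clamp = nearEdge(x, y, wrad, hrad, w, h);
        double sum = 0;
        for (int k = 0; k < kw; k++)
            for (int t = 0; t < kh; t++) {
                int px = x + k - wrad, py = y + t - hrad;
                if (clamp) { px = getBound(px, w); py = getBound(py, h); }
                sum += img[px][py]; // interior path: no bounds checks at all
            }
        return sum;
    }
}
```

In practice you would go further and run two separate loops (a clamp-free one over the interior, a clamped one over the border) so the interior loop contains no branch at all.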


Hi. Thanks for the reply.

Yes, the problem was the algorithm (it was totally not working): it was operating directly on the packed colour values when it should operate on the individual R, G and B channels. So I rewrote the algorithm.

public void convolve(Kernel kernel, double mult){
        int kwidth = kernel.width;
        int kheight = kernel.height;

        int i, j;

        double[][] kdata = kernel.kernel;

        int color0;

        int[][] r = new int[width][height],
                g = new int[width][height],
                b = new int[width][height];

        // split every packed 0xRRGGBB pixel into separate channel planes
        for(i = 0; i < width; i++){
            for(j = 0; j < height; j++){
                color0 = get2DCoordPixel(i, j);

                r[i][j] = (color0 - (color0 & 0x00FFFF)) >> 16; // = (color0 >> 16) & 0xFF
                g[i][j] = (color0 - (color0 & 0xFF00FF)) >> 8;  // = (color0 >> 8) & 0xFF
                b[i][j] = (color0 - (color0 & 0xFFFF00));       // = color0 & 0xFF
            }
        }

        // normalization factor: the sum of the kernel's coefficients
        double nf = 0;

        for(i = 0; i < kwidth; i++){
            for(j = 0; j < kheight; j++){
                nf += kdata[i][j];
            }
        }

        // convolve each channel independently, then re-pack the result
        for(i = 0; i < width; i++){
            for(j = 0; j < height; j++){
                drawPixel(i, j, ((int)convolvePixel(r, kdata, i, j, kwidth,
                        kheight, mult, nf) << 16) +
                                ((int)convolvePixel(g, kdata, i, j, kwidth,
                        kheight, mult, nf) << 8) +
                                 (int)convolvePixel(b, kdata, i, j, kwidth,
                        kheight, mult, nf));
            }
        }
    }

    private double convolvePixel(int[][] cdata, double[][] kdata, int x, int y,
            int kw, int kh, double mult, double normfactor){
        double out = 0;

        int wrad = kw >>> 1;
        int hrad = kh >>> 1;

        for(int i = 0; i < kw; ++i){
            for(int j = 0; j < kh; j++){
                // sample the clamped neighbourhood around (x, y)
                out += cdata[getBound(x+i-wrad, width)]
                            [getBound(y+j-hrad, height)]*kdata[i][j];
            }
        }

        return (out < 255) ? out*mult/normfactor : 255;
    }

As you can see, it now “extracts” R, G and B from the pixels, then convolves each channel pixel by pixel.

So, it sort of works now…

The problem now is the brightness: when I run the algorithm, the output image is too bright.

[Screenshots: Cattura.PNG (original), Cattura2.PNG, Cattura3.PNG]

The first one is the original, then the result of a 1 1 1; 1 1 1; 1 1 1 kernel, then of a 2/9 2/9 2/9; 2/9 2/9 2/9; 2/9 2/9 2/9 kernel.

I think I should switch to the Discrete Fourier Transform.

Sorry for not keeping on top of this.

This article points out that the image brightness will change if the sum of the convolution matrix is not 1.0. Neither of the matrices you mention in your last post sums to 1.0.
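The usual fix is to divide every coefficient by the kernel's sum before convolving. A sketch, assuming the kernel is a plain 2D array (kernels summing to 0, like your edge detector, are left alone, since dividing by 0 makes no sense there):

```java
public class KernelNormalize {
    // Returns a copy of the kernel scaled so its coefficients sum to 1.0.
    // Zero-sum kernels (e.g. edge detectors) are returned unchanged.
    static double[][] normalize(double[][] k) {
        double sum = 0;
        for (double[] row : k)
            for (double v : row) sum += v;
        if (sum == 0) return k;

        double[][] out = new double[k.length][k[0].length];
        for (int i = 0; i < k.length; i++)
            for (int j = 0; j < k[0].length; j++)
                out[i][j] = k[i][j] / sum;
        return out;
    }
}
```

With this, your 1 1 1; 1 1 1; 1 1 1 kernel becomes a proper box blur (each coefficient 1/9) instead of brightening the image.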

Does the edge detection matrix work? [the original matrix]

What is the implementation of `get2DCoordPixel()`? That is likely to be part of why it is slow.