[[File:Model of a segmented femur - journal.pone.0079004.g005.png|thumb|Model of a segmented [[femur]]. It shows the outer surface (red), the surface between compact bone and spongy bone (green) and the surface of the bone marrow (blue).]]
"Why does my computer keep freezing up?" I was asked by a great deal of people the cause of their computer freeze problems. And I am fed up with spending much time in answering the query time plus time again. This article is to tell you the real cause of your PC Freezes.<br><br>You may find which there are registry products that are free plus those which you will have to pay a nominal sum for. Some registry cleaners offer a bare bones program for free with the way of upgrading to a more advanced, powerful variation of the same system.<br><br>Registry cleaning is significant considering the registry can get crowded plus messy whenever it's left unchecked. False entries send the operating system searching for files and directories that have long ago been deleted. This takes time and utilizes precious resources. So, a slowdown inevitably takes place. It is specifically noticeable whenever you multitask.<br><br>Always see to it which you have installed antivirus, anti-spyware plus anti-adware programs and have them updated on a regular basis. This can help stop windows XP running slow.<br><br>Google Chrome crashes on Windows 7 if the registry entries are improperly modified. Missing registry keys or registry keys with improper values may lead to runtime mistakes and thereby the problem occurs. We are recommended to scan the whole system registry and review the result. Attempt the registry repair procedure utilizing third-party [http://bestregistrycleanerfix.com/tune-up-utilities tuneup utilities] software.<br><br>Files with the DOC extension are additionally susceptible to viruses, nevertheless this is solved by advantageous antivirus programs. Another problem is that .doc files might be corrupted, unreadable or damaged due to spyware, adware, plus malware. These cases can prevent users from correctly opening DOC files. This really is when powerful registry cleaners become practical.<br><br>In alternative words, if your PC has any corrupt settings inside the registry database, these settings might make your computer run slower plus with a great deal of errors. And unfortunately, it's the case which XP is prone to saving countless settings from the registry inside the incorrect method, creating them unable to run properly, slowing it down plus causing a lot of errors. Each time we utilize your PC, it has to read 100's of registry settings... plus there are often numerous files open at once which XP gets confuse and saves numerous in the wrong method. Fixing these damaged settings may boost the speed of your program... and to do which, we should look to employ a 'registry cleaner'.<br><br>Registry cleaners may help a computer run inside a better mode. Registry products ought to be part of the standard scheduled maintenance system for the computer. You don't have to wait forever for a computer or the programs to load plus run. A small repair can bring back the speed you lost.
In [[computer vision]], '''image segmentation''' is the process of partitioning a [[digital image]] into multiple segments ([[Set (mathematics)|sets]] of [[pixel]]s, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.<ref name="computervision">Linda G. Shapiro and George C. Stockman (2001): “Computer Vision”, pp. 279–325, New Jersey, Prentice-Hall, ISBN 0-13-030796-3</ref><ref>Barghout, Lauren, and Lawrence W. Lee. "Perceptual information processing system." Paravue Inc. U.S. Patent Application 10/618,543, filed July 11, 2003.</ref> Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
 
The result of image segmentation is a set of segments that collectively cover the entire image, or a set of [[Contour line|contour]]s extracted from the image (see [[edge detection]]). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as [[color]], [[luminous intensity|intensity]], or [[Image texture|texture]]. Adjacent regions are significantly different with respect to the same characteristic(s).<ref name="computervision" />
When applied to a stack of images, typical in [[medical imaging]], the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms like [[Marching cubes]].
 
== Applications ==
 
Some of the practical applications of image segmentation are:
 
* [[Content-based image retrieval]]
* [[Machine vision]]
* [[Medical imaging]]<ref>{{cite journal | last1 = Pham | first1 = Dzung L. | last2 = Xu | first2 = Chenyang | last3 = Prince | first3 = Jerry L. | year = 2000 | title = Current Methods in Medical Image Segmentation | url = | journal = Annual Review of Biomedical Engineering | volume = 2 | issue = | pages = 315–337 | pmid = 11701515 | doi = 10.1146/annurev.bioeng.2.1.315 }}</ref>
** Locate tumors and other pathologies
** Measure tissue volumes
** Diagnosis, study of anatomical structure
* [[Object detection]]
** [[Pedestrian detection]]
** [[Face detection]]
** Brake light detection
** Locate objects in satellite images (roads, forests, crops, etc.)
* Recognition tasks
** [[Face recognition]]
** [[Fingerprint recognition]]
** [[Iris recognition]]
* Traffic control systems
* [[Video surveillance]]
 
Several general-purpose [[algorithm]]s and techniques have been developed for image segmentation. To be useful, these techniques must typically be combined with domain-specific knowledge in order to effectively solve the domain's segmentation problems.
 
==Thresholding==
 
The simplest method of image segmentation is called the [[Thresholding (image processing)|thresholding]] method. This method uses a clip-level (or threshold value) to turn a gray-scale image into a binary image.
 
The key to this method is selecting the threshold value (or values, when multiple levels are selected). Several popular methods are used in industry, including the maximum entropy method, [[Otsu's method]] (maximum variance), and [[k-means]] clustering.
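As an illustration, the following is a minimal sketch of Otsu's method (assuming an 8-bit gray-scale image stored in a NumPy array; all names are illustrative). It exhaustively evaluates each candidate threshold and keeps the one that maximizes the between-class variance:

<syntaxhighlight lang="python">
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                      # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Turning the gray-scale image into a binary image:
# binary = gray > otsu_threshold(gray)
</syntaxhighlight>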
 
Recently, methods have been developed for thresholding computed tomography (CT) images. The key idea is that, unlike [[Otsu's method]], the thresholds are derived from the radiographs instead of the (reconstructed) image.<ref>K. J. Batenburg and J. Sijbers, "Adaptive thresholding of tomograms by projection distance minimization", Pattern Recognition, vol. 42, no. 10, pp. 2297–2305, April 2009 [http://www.visielab.ua.ac.be/publications/adaptive-thresholding-tomograms-projection-distance-minimization http://dx.doi.org/10.1016/j.patcog.2008.11.027]</ref><ref>K. J. Batenburg and J. Sijbers, "Optimal Threshold Selection for Tomogram Segmentation by Projection Distance Minimization", IEEE Transactions on Medical Imaging, vol. 28, no. 5, pp. 676–686, June 2009 [http://www.visielab.ua.ac.be/publications/optimal-threshold-selection-tomogram-segmentation-projection-distance-minimization Download paper]</ref>
 
== Clustering methods ==
{{multiple image
<!-- Essential parameters -->
| align    = right
| direction = vertical
| width    = 300
| image1    = Polarlicht 2.jpg
| alt1      = Original image
| caption1  = Source image.
| image2    = Polarlicht 2 kmeans 16 large.png
| alt2      = Processed image
| caption2  = Image after running ''k''-means with ''k = 16''. Note that a common technique to improve performance for large images is to downsample the image, compute the clusters, and then reassign the values to the larger image if necessary.
}}
The [[K-means algorithm]] is an [[iterative]] technique that is used to [[Cluster analysis|partition an image]] into ''K'' clusters.<ref>Barghout, Lauren, and Jacob Sheynin. "Real-world scene perception and perceptual organization: Lessons from Computer Vision." Journal of Vision 13.9 (2013): 709-709.</ref> The basic [[algorithm]] is:
 
# Pick ''K'' cluster centers, either [[random]]ly or based on some [[heuristic]]
# Assign each pixel in the image to the cluster that minimizes the [[distance]] between the pixel and the cluster center
# Re-compute the cluster centers by averaging all of the pixels in the cluster
# Repeat steps 2 and 3 until convergence is attained (i.e. no pixels change clusters)
 
In this case, [[distance]] is the squared or absolute difference between a pixel and a cluster center. The difference is typically based on pixel [[Hue|color]], [[Brightness|intensity]], [[Texture (computer graphics)|texture]], and location, or a weighted combination of these factors. ''K'' can be selected manually, [[random]]ly, or by a [[heuristic]].  This algorithm is guaranteed to converge, but it may not return the [[Global optimum|optimal]] solution. The quality of the solution depends on the initial set of clusters and the value of ''K''.
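A minimal sketch of these steps (assuming a color image as an H×W×3 NumPy array and using squared color distance only; function and parameter names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def kmeans_segment(image, k=16, iters=20, seed=0):
    """Partition an H x W x 3 image into k clusters by pixel color."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    # Step 1: pick k cluster centers randomly from the pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Step 2: assign each pixel to the nearest center (squared distance).
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each center as the mean of its assigned pixels.
        new_centers = np.array(
            [pixels[labels == j].mean(axis=0) if np.any(labels == j)
             else centers[j] for j in range(k)])
        # Step 4: stop when no center moves (convergence).
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels.reshape(image.shape[:2])
</syntaxhighlight>

As the caption above notes, running this on a downsampled copy and then reassigning labels to the full-resolution image keeps the pixel-by-cluster distance matrix manageable for large images.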
 
== Compression-based methods ==
 
Compression-based methods postulate that the optimal segmentation is the one that minimizes, over all possible segmentations, the coding length of the data.<ref>Hossein Mobahi, Shankar Rao, Allen Yang, Shankar Sastry and Yi Ma.
[http://perception.csl.illinois.edu/coding/papers/MobahiH2011-IJCV.pdf Segmentation of Natural Images by Texture and Boundary Compression], International Journal of Computer Vision (IJCV), 95 (1), pg. 86-98, Oct. 2011.</ref><ref>Shankar Rao, Hossein Mobahi, Allen Yang, Shankar Sastry and Yi Ma [http://perception.csl.illinois.edu/coding/papers/RaoS2009-ACCV.pdf Natural Image Segmentation with Adaptive Texture and Boundary Encoding], Proceedings of the Asian Conference on Computer Vision (ACCV) 2009, H. Zha, R.-i. Taniguchi, and S. Maybank (Eds.), Part I, LNCS 5994, pp. 135--146, Springer.</ref> The connection between these two concepts is that segmentation tries to find patterns in an image and any regularity in the image can be used to compress it. The method describes each segment by its texture and boundary shape. Each of these components is modeled by a probability distribution function and its coding length is computed as follows:
 
# The boundary encoding leverages the fact that regions in natural images tend to have a smooth contour. This prior is used by [[Huffman coding]] to encode the difference [[chain code]] of the contours in an image. Thus, the smoother a boundary is, the shorter the coding length it attains.
# Texture is encoded by [[lossy compression]] in a way similar to the [[minimum description length]] (MDL) principle, but here the length of the data given the model is approximated by the number of samples times the [[entropy]] of the model (see the formula below). The texture in each region is modeled by a [[multivariate normal distribution]] whose entropy has a closed-form expression. An interesting property of this model is that the estimated entropy bounds the true entropy of the data from above. This is because, among all distributions with a given mean and covariance, the normal distribution has the largest entropy. Thus, the true coding length cannot be more than what the algorithm tries to minimize.
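Concretely, the entropy of a <math>d</math>-dimensional multivariate normal distribution with covariance <math>\Sigma</math> has the closed form

:<math>H\big(\mathcal{N}(\mu,\Sigma)\big) = \tfrac{1}{2}\ln\!\big((2\pi e)^d \det\Sigma\big),</math>

so the texture coding length of a region containing <math>N</math> samples is approximated by <math>N \cdot H</math>.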
 
For any given segmentation of an image, this scheme yields the number of bits required to encode that image based on the given segmentation. Thus, among all possible segmentations of an image, the goal is to find the segmentation which produces the shortest coding length. This can be achieved by a simple agglomerative clustering method. The distortion in the lossy compression determines the coarseness of the segmentation and its optimal value may differ for each image. This parameter can be estimated heuristically from the contrast of textures in an image. For example, when the textures in an image are similar, such as in camouflage images, stronger sensitivity and thus lower quantization is required.
 
== Histogram-based methods ==
 
[[Histogram]]-based methods are very efficient when compared to other image segmentation methods because they typically require only one pass through the [[pixel]]s.  In this technique, a histogram is computed from all of the pixels in the image, and the peaks and valleys in the histogram are used to locate the [[Cluster analysis|clusters]] in the image.<ref name="computervision" /> [[Hue|Color]] or [[Brightness|intensity]] can be used as the measure.
 
A refinement of this technique is to [[Recursion (computer science)|recursively]] apply the histogram-seeking method to clusters in the image in order to divide them into smaller clusters.  This is repeated with smaller and smaller clusters until no more clusters are formed.<ref name="computervision" /><ref>{{cite journal | last1 = Ohlander | first1 = Ron | last2 = Price | first2 = Keith | last3 = Reddy | first3 = D. Raj | year = 1978 | title = Picture Segmentation Using a Recursive Region Splitting Method | url = | journal = Computer Graphics and Image Processing | volume = 8 | issue = 3| pages = 313–333 | doi = 10.1016/0146-664X(78)90060-6 }}</ref> 
 
One disadvantage of the histogram-seeking method is that it may be difficult to identify significant peaks and valleys in the image.
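When two dominant peaks do stand out, a simple valley threshold works; the following is a minimal sketch (a gray-scale NumPy image is assumed, and the smoothing width is an illustrative choice to suppress spurious peaks):

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter1d

def valley_threshold(gray, smooth_sigma=3.0):
    """Threshold at the deepest valley between the two highest
    peaks of the (smoothed) gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist = gaussian_filter1d(hist, smooth_sigma)
    # Local maxima: bins strictly higher than both neighbors.
    peaks = [i for i in range(1, 255)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    if len(peaks) < 2:
        raise ValueError("no two separated peaks found")
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))
    return gray > valley
</syntaxhighlight>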
 
Histogram-based approaches can also be quickly adapted to apply to multiple frames, while maintaining their single-pass efficiency. The histogram can be computed in multiple fashions when multiple frames are considered. The same approach that is taken with one frame can be applied to multiple frames, and after the results are merged, peaks and valleys that were previously difficult to identify are more likely to be distinguishable. The histogram can also be applied on a per-pixel basis, where the resulting information is used to determine the most frequent color for the pixel location. This approach segments based on active objects and a static environment, resulting in a different type of segmentation useful in [[Video tracking]].
 
==Edge detection==
 
[[Edge detection]] is a well-developed field on its own within image processing.
Region boundaries and edges are closely related,
since there is often a sharp adjustment in intensity at the region boundaries.
Edge detection techniques have therefore been used as the base of another segmentation technique.
 
The edges identified by edge detection are often disconnected.  To segment an object from an image however, one needs closed region boundaries.  The desired edges are the boundaries between such objects.
 
Segmentation methods can also be applied to edges obtained from edge detectors. Lindeberg and Li<ref>[http://www.csc.kth.se/cvap/abstracts/cvap186.html T. Lindeberg and M.-X. Li "Segmentation and classification of edges using minimum description length approximation and complementary junction cues", Computer Vision and Image Understanding, vol. 67, no. 1, pp. 88–98, 1997.]</ref> developed an integrated method that segments edges into straight and curved edge segments for parts-based object recognition, based on a minimum description length (MDL) criterion that is optimized by a split-and-merge-like method, with candidate breakpoints obtained from complementary junction cues suggesting the more likely points at which to consider partitions into different segments.
 
== Region-growing methods ==
 
The first [[region-growing]] method was the seeded region growing method. This method takes a set of seeds as input along with the image. The seeds mark each of the objects to be segmented. The regions are iteratively grown by comparing all unallocated neighboring pixels to the regions. The difference between a pixel's intensity value and the region's mean,  <math>\delta</math>, is used as a measure of similarity. The pixel with the smallest difference measured this way is allocated to the respective region. This process continues until all pixels are allocated to a region.
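A minimal sketch of seeded region growing (4-connectivity, one region per seed; as a simplification, the similarity <math>\delta</math> is frozen when a pixel is queued, whereas the method as described re-evaluates it against the updated region mean):

<syntaxhighlight lang="python">
import heapq
import numpy as np

def seeded_region_growing(gray, seeds):
    """Grow one region per seed pixel; `seeds` is a list of (row, col).
    Returns an integer label map (labels 1..len(seeds))."""
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=int)
    sums = [0.0] * (len(seeds) + 1)           # running sum per region
    counts = [0] * (len(seeds) + 1)           # pixel count per region
    heap = []                                 # entries: (delta, row, col, region)

    def queue_neighbors(r, c, region):
        mean = sums[region] / counts[region]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                delta = abs(float(gray[rr, cc]) - mean)
                heapq.heappush(heap, (delta, rr, cc, region))

    for region, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = region
        sums[region] += float(gray[r, c]); counts[region] += 1
        queue_neighbors(r, c, region)

    while heap:  # always allocate the pixel with the smallest delta next
        _, r, c, region = heapq.heappop(heap)
        if labels[r, c]:
            continue                          # already claimed by a region
        labels[r, c] = region
        sums[region] += float(gray[r, c]); counts[region] += 1
        queue_neighbors(r, c, region)
    return labels
</syntaxhighlight>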
 
Seeded region growing requires seeds as additional input. The segmentation results are dependent on the choice of seeds. Noise in the image can cause the seeds to be poorly placed. Unseeded region growing is a modified algorithm that doesn't require explicit seeds. It starts off with a single region <math>A_1</math> – the pixel chosen here does not significantly influence the final segmentation. At each iteration it considers the neighboring pixels in the same way as seeded region growing. It differs from seeded region growing in that if the minimum <math>\delta</math> is less than a predefined threshold <math>T</math>, then the pixel is added to the respective region <math>A_j</math>. If not, the pixel is considered significantly different from all current regions <math>A_i</math> and a new region <math>A_{n+1}</math> is created with this pixel.
 
One variant of this technique, proposed by [[Haralick]] and Shapiro (1985),<ref name="computervision" /> is based on pixel [[Brightness|intensities]]. The [[Arithmetic mean|mean]] and [[scatter]] of the region and the intensity of the candidate pixel are used to compute a test statistic. If the test statistic is sufficiently small, the pixel is added to the region, and the region’s mean and scatter are recomputed. Otherwise, the pixel is rejected and is used to form a new region.
 
A special region-growing method is called <math>\lambda</math>-connected segmentation (see also [[lambda-connectedness]]). It is based on pixel [[Brightness|intensities]] and neighborhood-linking paths. A degree of connectivity (connectedness) will be calculated based on a path that is formed by pixels. For a certain value of <math>\lambda</math>, two pixels are called <math>\lambda</math>-connected if there is a path linking those two pixels and the connectedness of this path is at least <math>\lambda</math>.  <math>\lambda</math>-connectedness is an equivalence relation.<ref name="lambda-connectedness">L. Chen, H.D. Cheng, and J. Zhang, Fuzzy subfiber and its application to seismic lithology classification, Information Sciences: Applications, Vol 1, No 2, pp 77-95, 1994.</ref>
 
==Split-and-merge methods==
 
Split-and-merge segmentation is based on a [[quadtree]] partition of an image. It is sometimes called quadtree segmentation.
 
This method starts at the root of the tree, which represents the whole image. If it is found non-uniform (not homogeneous), it is split into four child squares (the splitting process), and so on recursively. Conversely, if four child squares are homogeneous, they can be merged into larger connected components (the merging process); each resulting node of the tree corresponds to a segment. This process continues recursively until no further splits or merges are possible.<ref name="split-and-merge1">S.L. Horowitz and T. Pavlidis, Picture Segmentation by a Directed Split and Merge Procedure, Proc. ICPR, 1974, Denmark, pp. 424–433.</ref><ref name="split-and-merge2">S.L. Horowitz and T. Pavlidis, Picture Segmentation by a Tree Traversal Algorithm, Journal of the ACM, 23 (1976), pp. 368–388.</ref> When a special data structure is involved in the implementation of the algorithm, its time complexity can reach <math>O(n\log n)</math>, which is optimal for this method.<ref name="split-and-merge3">L. Chen, [http://www.spclab.com/research/lambda/lambdaConn91.pdf The lambda-connected segmentation and the optimal algorithm for split-and-merge segmentation], Chinese J. Computers, 14(1991), pp 321-331</ref>
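A minimal sketch of the splitting half (homogeneity is tested as intensity variance against an illustrative threshold; the merging pass, which compares adjacent leaves, is omitted for brevity):

<syntaxhighlight lang="python">
import numpy as np

def quadtree_split(gray, r0=0, c0=0, r1=None, c1=None,
                   var_thresh=100.0, leaves=None):
    """Recursively split gray[r0:r1, c0:c1] until every leaf block is
    homogeneous (variance <= var_thresh) or a single pixel.
    Returns the leaf blocks as (r0, c0, r1, c1) tuples."""
    if r1 is None:
        r1, c1 = gray.shape
    if leaves is None:
        leaves = []
    block = gray[r0:r1, c0:c1]
    if block.var() <= var_thresh or (r1 - r0 <= 1 and c1 - c0 <= 1):
        leaves.append((r0, c0, r1, c1))       # homogeneous: keep as a leaf
        return leaves
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2   # split into four child squares
    for rr0, cc0, rr1, cc1 in ((r0, c0, rm, cm), (r0, cm, rm, c1),
                               (rm, c0, r1, cm), (rm, cm, r1, c1)):
        if rr0 < rr1 and cc0 < cc1:           # skip degenerate children
            quadtree_split(gray, rr0, cc0, rr1, cc1, var_thresh, leaves)
    return leaves

# Usage: leaves = quadtree_split(gray)
</syntaxhighlight>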
 
== Partial differential equation-based methods ==
Using a [[partial differential equation]] (PDE)-based method and solving the PDE by a numerical scheme, one can segment the image. Curve propagation is a popular technique in this category, with numerous applications to object extraction, object tracking, stereo reconstruction, etc. The central idea is to evolve an initial curve towards the lowest potential of a cost function, whose definition reflects the task to be addressed. As for most [[inverse problems]], the minimization of the cost functional is non-trivial and imposes certain smoothness constraints on the solution, which in the present case can be expressed as geometrical constraints on the evolving curve.
 
=== Parametric methods ===
[[Lagrangian]] techniques are based on parameterizing the contour according to some sampling strategy and then evolving each element according to image and internal terms. Such techniques are fast and efficient; however, the original "purely parametric" formulation (due to Kass and Terzopoulos in 1987 and known as "[[Snake (computer vision)|snakes]]") is generally criticized for its limitations regarding the choice of sampling strategy, the internal geometric properties of the curve, topology changes (curve splitting and merging), addressing problems in higher dimensions, etc. Nowadays, efficient "discretized" formulations have been developed to address these limitations while maintaining high efficiency. In both cases, energy minimization is generally conducted using a steepest-gradient descent, whereby derivatives are computed using, e.g., finite differences.
 
=== Level set methods ===
The level set method was initially proposed to track moving interfaces by Osher and Sethian in 1988 and spread across various imaging domains in the late 1990s. It can be used to efficiently address the problem of curve/surface/etc. propagation in an implicit manner. The central idea is to represent the evolving contour using a signed function whose zero level corresponds to the actual contour. Then, according to the motion equation of the contour, one can easily derive a similar flow for the implicit surface that, when applied to the zero level, will reflect the propagation of the contour. The level set method offers numerous advantages: it is implicit, is parameter-free, provides a direct way to estimate the geometric properties of the evolving structure, can change topology, and is intrinsic. It can also be used to define an optimization framework, as proposed by Zhao, Merriman and Osher in 1996, making it a very convenient framework for addressing numerous applications of computer vision and medical image analysis.<ref>S. Osher and N. Paragios.
[http://www.mas.ecp.fr/vision/Personnel/nikos/osher-paragios/ Geometric Level Set Methods in Imaging Vision and Graphics], Springer Verlag, ISBN 0-387-95488-0, 2003.</ref> Furthermore, research into various [[level set data structures]] has led to very efficient implementations of this method.
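A minimal sketch of the implicit representation (motion with a constant speed <math>F</math> along the normal, explicit time stepping; upwind differencing and reinitialization of the signed function, which a robust implementation needs, are glossed over):

<syntaxhighlight lang="python">
import numpy as np

def evolve_level_set(phi, speed=1.0, dt=0.2, steps=100):
    """Evolve the implicit contour {phi == 0} in its normal direction
    by integrating phi_t + F * |grad(phi)| = 0."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)             # central differences
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        phi = phi - dt * speed * grad_norm
    return phi

# Example: a circle of radius 20, represented as a signed function
# (negative inside); a positive speed makes the zero level expand.
yy, xx = np.mgrid[0:128, 0:128]
phi0 = np.sqrt((xx - 64.0) ** 2 + (yy - 64.0) ** 2) - 20.0
phi = evolve_level_set(phi0, speed=1.0)       # radius grows to about 40
</syntaxhighlight>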
 
=== Fast marching methods ===
The [[fast marching method]] has been used in image segmentation,<ref>{{cite web|url=http://math.berkeley.edu/~sethian/2006/Applications/Medical_Imaging/artery.html|title=Segmentation in Medical Imaging|author=James A. Sethian|accessdate=15 January 2012}}</ref> and this model has been improved (permitting both positive and negative propagation speeds) in an approach called the generalized fast marching method.<ref>{{Citation|
journal=Numerical Algorithms|
date=July 2008|volume=48|issue=1-3|pages=189–211|
title=Generalized fast marching method: applications to image segmentation|
first=Nicolas|last=Forcadel | first2=Carole | last2=Le Guyader | first3= Christian | last3= Gout|
url=http://rd.springer.com/article/10.1007/s11075-008-9183-x }}</ref>
 
== Graph partitioning methods ==
 
[[Graph (data structure)|Graph]] partitioning methods can effectively be used for image segmentation. In these methods, the image is modeled as a weighted, [[undirected graph]]. Usually a pixel or a group of pixels is associated with a [[Vertex (graph theory)|node]], and [[Glossary of graph theory#Basics|edge]] weights define the (dis)similarity between neighboring pixels. The graph (image) is then partitioned according to a criterion designed to model "good" clusters. Each partition of the nodes (pixels) output by these algorithms is considered an object segment in the image. Some popular algorithms of this category are normalized cuts,<ref>Jianbo Shi and [[Jitendra Malik]] (2000): [http://www.cs.cmu.edu/~jshi/papers/pami_ncut.pdf "Normalized Cuts and Image Segmentation"], ''IEEE Transactions on pattern analysis and machine intelligence'', pp 888-905, Vol. 22, No. 8</ref> [[random walker (computer vision)|random walker]],<ref>Leo Grady (2006): [http://www.cns.bu.edu/~lgrady/grady2006random.pdf "Random Walks for Image Segmentation"], ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', pp. 1768–1783, Vol. 28, No. 11</ref> minimum cut,<ref>Z. Wu and R. Leahy (1993): [ftp://sipi.usc.edu/pub/leahy/pdfs/MAP93.pdf "An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation"], ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', pp. 1101–1113, Vol. 15, No. 11</ref> isoperimetric partitioning,<ref>Leo Grady and Eric L. Schwartz (2006): [http://www.cns.bu.edu/~lgrady/grady2006isoperimetric.pdf "Isoperimetric Graph Partitioning for Image Segmentation"], ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', pp. 469–475, Vol. 28, No. 3</ref> and [[minimum spanning tree-based segmentation]].<ref>C. T. Zahn (1971): [http://web.cse.msu.edu/~cse802/Papers/zahn.pdf "Graph-theoretical methods for detecting and describing gestalt clusters"], ''IEEE Transactions on Computers'', pp. 68–86, Vol. 20, No. 1</ref>
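A minimal sketch in the spirit of normalized cuts (4-connected pixel graph, Gaussian intensity affinities with an illustrative <math>\sigma</math>; a full implementation would solve the generalized eigenproblem and partition recursively):

<syntaxhighlight lang="python">
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_bipartition(gray, sigma=0.1):
    """Split an image in two by thresholding the second eigenvector
    (the Fiedler vector) of the normalized graph Laplacian."""
    h, w = gray.shape
    img = gray.astype(float) / float(gray.max())
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    # 4-connectivity: weight each horizontal and vertical neighbor pair.
    pairs = [((slice(None), slice(0, -1)), (slice(None), slice(1, None))),
             ((slice(0, -1), slice(None)), (slice(1, None), slice(None)))]
    for sl_a, sl_b in pairs:
        i, j = idx[sl_a].ravel(), idx[sl_b].ravel()
        wgt = np.exp(-(img[sl_a].ravel() - img[sl_b].ravel()) ** 2 / sigma ** 2)
        rows += [i, j]; cols += [j, i]; vals += [wgt, wgt]   # symmetric graph
    W = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w))
    d = np.asarray(W.sum(axis=1)).ravel()
    d_is = sp.diags(1.0 / np.sqrt(d))
    L = sp.identity(h * w) - d_is @ W @ d_is      # normalized Laplacian
    _, vecs = eigsh(L, k=2, which='SM')           # two smallest eigenpairs
    return (vecs[:, 1] > 0).reshape(h, w)
</syntaxhighlight>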
 
== Watershed transformation ==
 
The [[Watershed (algorithm)|watershed transformation]] considers the gradient magnitude of an image as a topographic surface. Pixels having the highest gradient magnitude intensities (GMIs) correspond to watershed lines, which represent the region boundaries. Water placed on any pixel enclosed by a common watershed line flows downhill to a common local intensity minimum (LIM). Pixels draining to a common minimum form a catch basin, which represents a segment.
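For instance, with scikit-image (a sketch only; marker selection is the delicate step in practice and is hard-coded here from illustrative intensity quantiles):

<syntaxhighlight lang="python">
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segment(gray):
    """Flood the gradient-magnitude 'topography' from seed markers."""
    gradient = sobel(gray)                        # topographic surface
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < np.quantile(gray, 0.10)] = 1   # dark region seeds
    markers[gray > np.quantile(gray, 0.90)] = 2   # bright region seeds
    return watershed(gradient, markers)           # label map of catch basins
</syntaxhighlight>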
 
== Model based segmentation ==
 
The central assumption of such an approach is that structures of interest/organs have a repetitive form of geometry. Therefore, one can seek a probabilistic model that explains the variation of the shape of the organ and then, when segmenting an image, impose constraints using this model as a prior. Such a task involves (i) registration of the training examples to a common pose, (ii) probabilistic representation of the variation of the registered samples, and (iii) statistical inference between the model and the image. State-of-the-art methods in the literature for knowledge-based segmentation involve active shape and appearance models, active contours, deformable templates and level-set based methods. {{citation needed|date=December 2012}}
 
== Multi-scale segmentation ==
 
Image segmentations are computed at multiple scales in [[scale space]] and sometimes propagated from coarse to fine scales; see [[scale-space segmentation]].
 
Segmentation criteria can be arbitrarily complex and may take into account global as well as local criteria. A common requirement is that each region must be connected in some sense.
 
===One-dimensional hierarchical signal segmentation===
 
Witkin's seminal work<ref>Witkin, A. P. "Scale-space filtering", Proc. 8th Int. Joint Conf. Art. Intell., Karlsruhe, Germany, 1019–1022, 1983.</ref><ref>A. Witkin, "Scale-space filtering: A new approach to multi-scale description," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing ([[ICASSP]]), vol. 9, San Diego, CA, Mar. 1984, pp. 150–153.</ref> in scale space included the notion that a one-dimensional signal could be unambiguously segmented into regions, with one scale parameter controlling the scale of segmentation.
 
A key observation is that the zero-crossings of the second derivatives (minima and maxima of the first derivative or slope) of multi-scale-smoothed versions of a signal form a nesting tree, which defines hierarchical relations between segments at different scales. Specifically, slope extrema at coarse scales can be traced back to corresponding features at fine scales. When a slope maximum and slope minimum annihilate each other at a larger scale, the three segments that they separated merge into one segment, thus defining the hierarchy of segments.
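A minimal sketch of this in one dimension at a single scale (tracing across scales would repeat it for increasing <math>\sigma</math> and link nearby crossings into the nesting tree):

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter1d

def segment_1d(signal, sigma=5.0):
    """Cut a 1-D signal at the zero-crossings of the second derivative
    of its Gaussian-smoothed version (i.e. at slope extrema)."""
    # order=2 convolves with the second derivative of the Gaussian.
    d2 = gaussian_filter1d(signal.astype(float), sigma, order=2)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0] + 1
    bounds = np.unique(np.concatenate(([0], crossings, [len(signal)])))
    return list(zip(bounds[:-1], bounds[1:]))     # (start, end) per segment
</syntaxhighlight>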
 
===Image segmentation and primal sketch===
 
There have been numerous research works in this area, out of which a few have now reached a state where they can be applied either with interactive manual intervention (usually with application to medical imaging) or fully automatically. The following is a brief overview of some of the main research ideas that current approaches are based upon.
 
The nesting structure that Witkin described is, however, specific for one-dimensional signals and does not trivially transfer to higher-dimensional images. Nevertheless, this general idea has inspired several other authors to investigate coarse-to-fine schemes for image segmentation. Koenderink<ref>Koenderink, Jan "The structure of images", Biological Cybernetics, 50:363–370, 1984</ref> proposed to study how iso-intensity contours evolve over scales and this approach was investigated in more detail by Lifshitz and Pizer.<ref>[http://portal.acm.org/citation.cfm?id=80964&dl=GUIDE&coll=GUIDE Lifshitz, L. and Pizer, S.: A multiresolution hierarchical approach to image segmentation based on intensity extrema, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:6, 529–540, 1990.]</ref>
Unfortunately, the intensity of image features changes over scales, which implies that it is hard to trace coarse-scale image features to finer scales using iso-intensity information.
 
Lindeberg<ref>[http://www.nada.kth.se/~tony/abstracts/Lin92-IJCV.html Lindeberg, T.: Detecting salient blob-like image structures and their scales with a scale-space primal sketch: A method for focus-of-attention, International Journal of Computer Vision, 11(3), 283–318, 1993.]</ref><ref name=lin94>[http://www.nada.kth.se/~tony/book.html Lindeberg, Tony, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994], ISBN 0-7923-9418-6</ref> studied the problem of linking local extrema and saddle points over scales, and proposed an image representation called the scale-space primal sketch which makes explicit the relations between structures at different scales, and also makes explicit which image features are stable over large ranges of scale, including locally appropriate scales for them. Bergholm proposed to detect edges at coarse scales in scale-space and then trace them back to finer scales with manual choice of both the coarse detection scale and the fine localization scale.
 
Gauch and Pizer<ref>[http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=628490 Gauch, J. and Pizer, S.: Multiresolution analysis of ridges and valleys in grey-scale images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:6 (June 1993), pages: 635–646, 1993.]</ref> studied the complementary problem of ridges and valleys at multiple scales and developed a tool for interactive image segmentation based on multi-scale watersheds. The use of multi-scale watersheds with application to the gradient map has also been investigated by Olsen and Nielsen<ref>Olsen, O. and Nielsen, M.: Multi-scale gradient magnitude watershed segmentation, Proc. of ICIAP 97, Florence, Italy, Lecture Notes in Computer Science, pages 6–13. Springer Verlag, September 1997.</ref> and been carried over to clinical use by Dam.<ref>Dam, E., Johansen, P., Olsen, O., Thomsen, A., Darvann, T., Dobrzenieck, A., Hermann, N., Kitai, N., Kreiborg, S., Larsen, P., Nielsen, M.: "Interactive multi-scale segmentation in clinical use" in European Congress of Radiology 2000.</ref> Vincken et al.<ref>Vincken, K., Koster, A. and Viergever, M.: {{doi-inline|10.1109/34.574787|Probabilistic multiscale image segmentation}}, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:2, pp. 109–120, 1997.</ref> proposed a hyperstack for defining probabilistic relations between image structures at different scales. The use of stable image structures over scales has been furthered by Ahuja<ref>[http://vision.ai.uiuc.edu/~msingh/segmen/seg/MSS.html M. Tabb and N. Ahuja, Unsupervised multiscale image segmentation by integrated edge and region detection, IEEE Transactions on Image Processing, Vol. 6, No. 5, 642–655, 1997.]</ref><ref>[http://www.springerlink.com/content/44627w1458284738/ E. Akbas and N. Ahuja, "From ramp discontinuities to segmentation tree"]</ref> and his co-workers into a fully automated system. A fully automatic brain segmentation algorithm based on closely related ideas of multi-scale watersheds has been presented by Undeman and Lindeberg<ref>[http://www.csc.kth.se/cvap/abstracts/cvap285.html C. Undeman and T. Lindeberg (2003) "Fully Automatic Segmentation of MRI Brain Images using Probabilistic Anisotropic Diffusion and Multi-Scale Watersheds", Proc. Scale-Space'03, Isle of Skye, Scotland, Springer Lecture Notes in Computer Science, volume 2695, pages 641–656.]</ref> and been extensively tested in brain databases.
 
These ideas for multi-scale image segmentation by linking image structures over scales have also been picked up by Florack and Kuijper.<ref>Florack, L. and Kuijper, A.: The topological structure of scale-space images, Journal of Mathematical Imaging and Vision, 12:1, 65–79, 2000.</ref> Bijaoui and Rué<ref>[http://dx.doi.org/10.1016/0165-1684(95)00093-4 Bijaoui, A., Rué, F.: 1995, A Multiscale Vision Model, ''Signal Processing'' '''46''', 345]</ref> associate structures detected in scale-space above a minimum noise threshold into an object tree which spans multiple scales and corresponds to a kind of feature in the original signal. Extracted features are accurately reconstructed using an iterative conjugate gradient matrix method.
 
== Semi-automatic segmentation ==
 
In this kind of segmentation, the user outlines the region of interest with mouse clicks, and algorithms are applied so that the path that best fits the edge of the image is shown.
 
Techniques like [[Simple Interactive Object Extraction|SIOX]], [[Livewire Segmentation Technique|Livewire]], Intelligent Scissors or IT-SNAPS are used in this kind of segmentation.
 
== Trainable segmentation ==
Most segmentation methods are based only on color information of pixels in the image. Humans use much more knowledge than this when doing image segmentation, but implementing this knowledge would cost considerable computation time and would require a huge domain-knowledge database, which is currently not available. In addition to traditional segmentation methods, there are trainable segmentation methods which can model some of this knowledge.
 
Neural network segmentation relies on processing small areas of an image using an [[artificial neural network]]<ref name="Transactions on Engineering, Computing and Technology">[[Mahinda Pathegama]] & Ö Göl (2004): "Edge-end pixel extraction for edge-based image segmentation", ''Transactions on Engineering, Computing and Technology,'' vol. 2, pp 213–216, ISSN 1305-5313</ref> or a set of neural networks. After such processing the decision-making mechanism marks the areas of an image according to the category recognized by the neural network. A type of network designed especially for this is the [[Kohonen map]].
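A minimal sketch of the idea, with a small [[multilayer perceptron]] standing in for the network (the per-pixel features and the user-annotated mask are assumptions of this example):

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.neural_network import MLPClassifier

def pixel_features(gray):
    """Per-pixel features: raw intensity, smoothed intensity, edge strength."""
    g = gray.astype(float)
    feats = [g, gaussian_filter(g, 2.0),
             np.hypot(sobel(g, axis=0), sobel(g, axis=1))]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_and_segment(gray, labeled_mask):
    """`labeled_mask`: 0 = unlabeled, 1..K = user-annotated classes.
    Train on the annotated pixels, then label every pixel."""
    X, y = pixel_features(gray), labeled_mask.ravel()
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(X[y > 0], y[y > 0])
    return clf.predict(X).reshape(gray.shape)
</syntaxhighlight>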
 
[[Pulse-coupled networks|Pulse-coupled neural networks (PCNNs)]] are neural models obtained by modeling a cat’s visual cortex and developed for high-performance biomimetic image processing.
In 1989, Eckhorn introduced a neural model to emulate the mechanism of a cat’s visual cortex. The Eckhorn model provided a simple and effective tool for studying the visual cortex of small mammals, and was soon recognized as having significant application potential in image processing. In 1994, the Eckhorn model was adapted to be an image processing algorithm by Johnson, who termed this algorithm Pulse-Coupled Neural Network. Over the past decade, PCNNs have been utilized for a variety of image processing applications, including: image segmentation, feature generation, face extraction, motion detection, region growing, noise reduction, and so on.
A PCNN is a two-dimensional neural network. Each neuron in the network corresponds to one pixel in an input image, receiving its corresponding pixel’s color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them. The external and local stimuli are combined in an internal activation system, which accumulates the stimuli until it exceeds a dynamic threshold, resulting in a pulse output. Through iterative computation, PCNN neurons produce temporal series of pulse outputs. These pulse series contain information about the input images and can be utilized for various image processing applications, such as image segmentation and feature generation. Compared with conventional image processing methods, PCNNs have several significant merits, including robustness against noise, independence of geometric variations in input patterns, capability of bridging minor intensity variations in input patterns, etc.
 
'''Open-source implementations of trainable segmentation''':
* [http://fiji.sc/wiki/index.php/Trainable_Segmentation_Plugin Trainable Segmentation Plugin]
* [http://www.burgsys.com/image-processing-software-free.php IMMI]
 
== Segmentation benchmarking ==
 
Several segmentation benchmarks are available for comparing the performance of segmentation methods against state-of-the-art methods on standardized test sets:
* [http://mosaic.utia.cas.cz Prague On-line Texture Segmentation Benchmark]<ref>Haindl, M. – Mikeš, S. [http://dx.doi.org/10.1109/ICPR.2008.4761118 Texture Segmentation Benchmark], Proc. of the 19th Int. Conference on Pattern Recognition. IEEE Computer Society, 2008, pp. 1–4 ISBN 978-1-4244-2174-9 ISSN 1051-4651</ref>
* [http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/ The Berkeley Segmentation Dataset and Benchmark]<ref>
{{cite conference
| url          =
| title        =A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics
| author        = D. Martin
| coauthors    = C. Fowlkes and D. Tal and J. Malik
|date=July 2001
| volume        = 2
| booktitle    = Proc. 8th Int'l Conf. Computer Vision
| location      =
| pages        = 416–423
}}
</ref>
 
== See also ==
 
* [[Computer vision]]
* [[Data clustering]]
* [[Graph theory]]
* [[Histogram]]s
* [[Image-based meshing]]
* [[K-means algorithm]]
* [[Pulse-coupled networks]]
* [[Range image segmentation]]
* [[Region growing]]
* [[Balanced histogram thresholding]]
 
==External links==
* [http://csc.fsksm.utm.my/syed/projects/image-processing.html Some sample code that performs basic segmentation], by Syed Zainudeen. University Technology of Malaysia.
* [http://rd.springer.com/article/10.1007/s11075-008-9183-x Generalized Fast Marching method] by Forcadel et al. [2008] for applications in image segmentation.
* [http://www.iprg.co.in Image Processing Research Group] An Online Open Image Processing Research Community.
* [https://www.mathworks.com/discovery/image-segmentation.html Segmentation methods in image processing and analysis]
 
== References ==
{{reflist|2}}
;Notes
{{refbegin}}
* [http://instrumentation.hit.bg/Papers/2008-02-02%203D%20Multistage%20Entropy.htm 3D Entropy Based Image Segmentation]
*{{cite journal|last=Frucci| first=Maria|coauthors= Sanniti di Baja, Gabriella| year=2008|title=From Segmentation to Binarization of Gray-level Images|journal=[http://www.jprr.org/index.php/jprr Journal of Pattern Recognition Research]|volume=3|issue=1|pages=1–13|url=http://www.jprr.org/index.php/jprr/article/view/54/16}}
{{refend}}
 
{{DEFAULTSORT:Segmentation (Image Processing)}}
[[Category:Image segmentation|*]]
