by Y. K. Lee and Kelvin Hoon
A quantitative analysis came only in 1905, when Einstein [1] succeeded in stating the mathematical laws governing the movements of particles on the basis of the principles of the kinetic-molecular theory of heat. According to this theory, bodies of microscopically visible size suspended in a liquid perform irregular thermal movements, called Brownian molecular motion, which can easily be observed under a microscope. Brownian motion then gained wider acceptance because it could be treated as a practical mathematical model. As a result, many scientific theories and applications related to it have been developed, and they now play major roles in the world of physics.
Fig. 1. Brownian motion of a microscopic particle
In an undissociated dilute solution, a process of diffusion(*) takes place, caused by the Brownian motion of the suspended molecules. At the same time, another process proceeds in the direction opposite to that of diffusion: a movement of the suspended substance brought about by osmotic pressure forces(+).
The first step in the investigation of Brownian motion is to show how the process of diffusion depends on the distribution of osmotic pressure in the solution. The relationship between the diffusion and the mobility of the dissolved substance in the solvent is also to be found.
In his investigation, Einstein explained the above relationships by using a cylindrical vessel containing a dilute solution of two different concentrations. A movable piston is used as a semi-permeable partition to divide the solution. This would allow both the diffusion and the osmosis processes to take place. Osmotic differences exist as a result of the variation in concentrations. This phenomenon creates an osmotic pressure force that brings about the equalization of the concentrations in diffusion. Therefore, osmotic pressure can be looked upon as the driving force in diffusion cases. A mathematical evaluation of this phenomenon, based on the kinetic molecular theory of heat, produced an expression for the diffusion coefficient. This coefficient was found to be independent of the nature of the solution except for the viscosity of the solvent and for the size of the solute molecules.
The expression for the diffusion process discussed above is then related to the irregular motion of the solute particles, with the aid of the same vessel model. The molecular theory of heat also affords a second point of view: the individual molecules of a liquid alter their positions in a random manner. This wandering about of the particles turns a non-uniform distribution of solute concentration into a uniform one, which is precisely a diffusion process. Detailed mathematical procedures presented by Einstein show that the average magnitude of the random displacements of solute particles can be calculated from the diffusion coefficient. Alternatively, using the result derived in the previous step, this quantity can be obtained from the viscosity of the solvent, the size of the solute particles and the absolute temperature. Thus, the relationship between the path described by solute particles in a solution and the process of diffusion was established.
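The two relations just described are standard results: the diffusion coefficient D = kT / (6*pi*eta*r), which depends only on the temperature, the solvent viscosity and the particle radius, and the root-mean-square displacement sqrt(2Dt). The short Python sketch below evaluates them for an assumed setting (roughly water at room temperature and a particle of one micrometre diameter); the numerical values are illustrative assumptions, not figures from Einstein's paper.

```python
# Sketch of the two relations discussed above: the diffusion coefficient
# and the mean displacement it implies. The temperature, viscosity and
# particle size below are illustrative assumptions (water at room
# temperature, a 1 micrometre particle).
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 293.15              # absolute temperature, K
eta = 1.0e-3            # viscosity of water, Pa*s
r = 0.5e-6              # particle radius, m (1 micrometre diameter)

# Diffusion coefficient depends only on temperature, solvent viscosity
# and particle size, as stated in the text.
D = k_B * T / (6 * math.pi * eta * r)

# Root-mean-square displacement along one axis after time t seconds.
t = 60.0
rms_displacement = math.sqrt(2 * D * t)

print(f"D = {D:.3e} m^2/s")
print(f"RMS displacement after {t:.0f} s = {rms_displacement:.3e} m")
```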
According to the molecular kinetic conception, there exists no essential difference between a solute molecule and a suspended particle. Hence, the elementary theory of Brownian motion can be applied to any kind of small suspended spherical particles.
(*)diffusion is a process of interpenetration between two substances, without chemical combination, by the natural movement of the particles.
(+)osmosis is the passage of a solvent from a less concentrated into a more concentrated solution through a semi-permeable membrane, i.e. one permeable to the solvent but not to the solute.
Before we delve into the detailed applications of Brownian motion, the concept of the 'fractal' has to be introduced, as it plays a major part in many important applications of our subject.
The concept of the 'fractal' was introduced by IBM researcher Benoit B. Mandelbrot [2] nearly two decades ago. Expressed in its simplest form, 'fractals' refer to images in the real world which tend to consist of many complex patterns that recur at various sizes.
Mandelbrot proposed the idea of a fractal (short for "fractional dimension") as a way to cope with problems of scale in the real world. He defined a fractal to be any curve or surface that is independent of scale. This property, referred to as self-similarity, means that any portion of the curve, if blown up in scale, would appear identical to the whole curve. Thus the transition from one scale to another can be represented as iterations of a scaling process (e.g. Fig. 2).
Fig. 2. Forming a cross by iteration of a simple procedure.
An important difference between fractal curves and the idealized curves that are normally applied to natural processes is that fractals are nowhere differentiable. That is, although they are continuous, they are nowhere smooth: they are "kinked" everywhere. Fractals can be characterized by the way in which the representation of their structure changes with changing scale.
The notion of "fractional dimension" provides a way to measure how rough fractal curves are. We normally consider lines to have a dimension of 1, surfaces a dimension of 2 and solids a dimension of 3. However, a rough curve (say) wanders around on a surface; in the extreme it may be so rough that it effectively fills the surface on which it lies. Very convoluted surfaces, such as a tree's foliage or the internal surfaces of lungs, may effectively be three-dimensional structures. We can therefore think of roughness as an increase in dimension: a rough curve has a dimension between 1 and 2, and a rough surface has a dimension somewhere between 2 and 3. The dimension of a fractal curve is a number that characterizes the way in which the measured length between given points increases as scale decreases. Whilst the topological dimension of a line is always 1 and that of a surface always 2, the fractal dimension may be any real number between 1 and 2.
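As a rough illustration of how such a fractional dimension can be measured in practice, the sketch below applies a simple box-counting estimate to a simulated random walk: count how many boxes of side eps are needed to cover the curve, and fit the slope of log N(eps) against log(1/eps). The walk, the box sizes and the fitting procedure are illustrative assumptions; a finite sample only approximates the limiting dimension (which, for a Brownian path, is 2, as discussed below).

```python
# A rough sketch of estimating a fractional dimension by box counting.
# The curve used here is a simple random walk, purely as an illustration.
import numpy as np

rng = np.random.default_rng(0)
steps = rng.standard_normal((100_000, 2))
path = np.cumsum(steps, axis=0)
path -= path.min(axis=0)
path /= path.max()               # normalise the curve into the unit square

def box_count(points, eps):
    # number of eps-sized boxes containing at least one point of the curve
    boxes = np.unique(np.floor(points / eps).astype(int), axis=0)
    return len(boxes)

eps_values = np.array([0.1, 0.05, 0.025, 0.0125, 0.00625])
counts = np.array([box_count(path, e) for e in eps_values])

# Slope of log N(eps) vs log(1/eps) approximates the fractal dimension.
slope, _ = np.polyfit(np.log(1 / eps_values), np.log(counts), 1)
print(f"estimated box-counting dimension: {slope:.2f}")
```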
Fractals are said to be self-similar. The idea of self-similarity means that if we shrink or enlarge a fractal pattern, its appearance should remain unchanged. Conversely, fractal patterns usually arise when simple patterns are transformed repetitively on smaller and smaller scales (e.g. Fig. 2). An important class of processes that produce fractal patterns is the random iteration algorithms (like Brownian motion), which produce images of fractal objects. The procedure is akin to using a pen to mark dots at random on a sheet of paper. However, instead of being completely random, the movement of the pen from one position to the next is selected, at random, from a set of rules, each having a fixed probability of being chosen (mathematical details not discussed here).
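A classic example of such a random iteration algorithm, given here purely for illustration, is the "chaos game": the pen repeatedly jumps halfway towards one of three fixed corner points, each chosen with equal probability, and a Sierpinski-triangle fractal emerges. This particular rule set is an assumption of the sketch, not one taken from the text.

```python
# Minimal random-iteration ("chaos game") sketch: each rule moves the pen
# halfway towards one of three corners, chosen at random with fixed
# probability 1/3. The resulting point cloud is a fractal.
import random

corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.5, 0.5
points = []
for _ in range(50_000):
    cx, cy = random.choice(corners)    # rule chosen at random
    x, y = (x + cx) / 2, (y + cy) / 2  # apply the chosen transformation
    points.append((x, y))

# 'points' now traces out a fractal pattern; plot it with any 2-D plotting tool.
print(len(points), "points generated")
```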
Brownian motion is an example of a process that has a fractal dimension of 2. One occurrence is the motion of microscopic particles, which results from random jostling by water molecules (if water is the medium). The path of such a particle is a "random walk" in which both direction and distance are uniformly distributed random variables. So, in moving from a given location in space to any other, the path taken by the particle is almost certain to fill the whole space before it reaches the exact point that is the 'destination' (hence the fractal dimension of 2).
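A minimal sketch of such a random walk, with uniformly distributed direction and step length, is given below; the step count and ranges are arbitrary illustrative choices.

```python
# Random walk in the plane: both the direction and the distance of every
# step are drawn from uniform distributions, as described in the text.
import math, random

random.seed(0)
x, y = 0.0, 0.0
positions = [(x, y)]
for _ in range(10_000):
    angle = random.uniform(0.0, 2.0 * math.pi)   # uniformly distributed direction
    distance = random.uniform(0.0, 1.0)          # uniformly distributed distance
    x += distance * math.cos(angle)
    y += distance * math.sin(angle)
    positions.append((x, y))

# Plotted, this path wanders back over itself and, in the limit, fills the
# plane, which is why its fractal dimension is 2.
print(f"net displacement after 10000 steps: {math.hypot(x, y):.1f}")
```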
One important result of combining the theory of fractals and Brownian motion is the 'fractional Brownian motion model'. This model regards naturally occurring rough surfaces (like mountains, clouds and trees) as the end result of random walks, and utilizes a random iteration algorithm to produce fractal patterns. The applications of this model are widespread, as will be seen in the cited examples.
Another aspect of Brownian motion is its effect on the formation of aggregates such as crystals. Figure 3 shows structures formed under different assumptions about the relative rate of horizontal movement (h) and the probability (p) of a settling particle sticking to fixed particles as it brushes past. In the figure the following values are shown: (a) h=1, p=0; (b) h=1, p=1; (c) h=10, p=0; (d) h=10, p=1. "Sticky" particles (p=1 in the figure) tend to form structures resembling (say) trees or mosses. Such properties are exploited in animation to generate pictures of artificial plants and landscapes.
Fig. 3. Structures arising from Brownian motion of falling particles.
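The following is a much-simplified sketch of the falling-particle aggregation described above: particles drift down a lattice, make h random horizontal moves per unit of fall, and stick to any occupied neighbour with probability p. The lattice size, particle count and update rules are illustrative assumptions and are not claimed to reproduce the exact model behind Fig. 3.

```python
# Toy model of settling particles: h horizontal moves per unit of fall,
# sticking probability p when brushing past a fixed particle.
import random

WIDTH, HEIGHT = 80, 80          # lattice size (illustrative)

def occupied_neighbour(grid, r, c):
    return any(0 <= rr < HEIGHT and 0 <= cc < WIDTH and grid[rr][cc]
               for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)))

def grow(h=1, p=1.0, n_particles=800, seed=0):
    random.seed(seed)
    grid = [[False] * WIDTH for _ in range(HEIGHT)]   # grid[row][col]; row 0 is the floor
    for _ in range(n_particles):
        r, c = HEIGHT - 1, random.randrange(WIDTH)    # drop in at a random column
        stuck = False
        while not stuck:
            for _ in range(h):                        # h horizontal moves per unit of fall
                nc = (c + random.choice((-1, 1))) % WIDTH
                if not grid[r][nc]:
                    c = nc
                if occupied_neighbour(grid, r, c) and random.random() < p:
                    stuck = True                      # "sticky" particle adheres as it brushes past
                    break
            if not stuck:
                if r == 0 or grid[r - 1][c]:
                    stuck = True                      # reached the floor or landed on the deposit
                else:
                    r -= 1                            # keep falling
        grid[r][c] = True
    return grid

deposit = grow(h=10, p=1.0)                           # roughly case (d) of Fig. 3
print(sum(row.count(True) for row in deposit), "particles deposited")
```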
Medical images, like other natural phenomena, have a degree of randomness associated with both the natural random nature of the underlying structure and the random noise superimposed on the image. The fractional Brownian motion model regards naturally occurring surfaces as the result of random walks, so the intensity surface of a medical image can be treated with the fractional Brownian motion model.
Chen and Fox [3] identified two applications of fractal analysis in medical imaging, given as follows:
i) Classification
Classification refers to the identification of normal and abnormal ultrasonic liver images.
Conventional statistical techniques have always been attempted in the past to distinguish among these images.
For example, Pentland[4] classified the textures of an image by computing the Fourier transform of the image and determining its power spectrum.
He then applied a linear regression technique on the log of the power spectrum to estimate the fractal dimension. However, the fractal concept
suggested by Chen and Fox have a more natural theoretical connection to the underlying processes of image formation. Abandoning the conventional methods,
a normalized fractional Brownian motion feature vector is defined to represent the statistical features of the image surface from the Brownian
motion estimation concept. The objective of this concept is to obtain the average absolute intensity difference of pixel pairs(e.g 7*7 pixel pair) at
different scales. Different ultrasonic images were compared based on the differences among the feature vectors. This is because real surfaces in
medical images are not perfect fractal surfaces and their statistical features cannot be represented by a single value for the fractal dimension.
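A possible sketch of such a feature vector: for each pixel separation (scale) d, the average absolute intensity difference of pixel pairs d apart is computed and the resulting vector is normalized. The test image, the set of scales and the normalization are illustrative assumptions; Chen and Fox's exact definition is not reproduced here.

```python
# Sketch of a normalized fractional-Brownian-motion feature vector:
# mean absolute intensity difference of pixel pairs at several scales.
import numpy as np

def fbm_feature_vector(image, scales=(1, 2, 3, 4, 5, 6, 7)):
    image = image.astype(float)
    features = []
    for d in scales:
        # pairs separated by d pixels, horizontally and vertically
        dx = np.abs(image[:, d:] - image[:, :-d]).mean()
        dy = np.abs(image[d:, :] - image[:-d, :]).mean()
        features.append((dx + dy) / 2.0)
    features = np.array(features)
    return features / features[0]        # normalise by the smallest-scale value

rng = np.random.default_rng(1)
test_patch = rng.integers(0, 256, size=(64, 64))   # stand-in for an ultrasound patch
print(fbm_feature_vector(test_patch))
```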
ii) Edge enhancement
The basic approach here was suggested by Pentland [4] for image segmentation and edge detection. Instead of using Fourier power spectrum analysis, a transformed image of the liver is obtained by calculating the fractal dimension of each pixel over the whole medical image. To obtain the fractal dimension value of each pixel, calculating the fractal dimension of the 7*7 pixel block centred on that pixel was recommended; a sketch of this procedure is given below. The fractal dimension distribution appears to hold promise for edge enhancement that does not increase noise in the way that convolution-based (Fourier transform) algorithms do. The transformation can thus enhance the detection of edges over the original image.
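A sketch of this per-pixel computation: for every pixel, the mean absolute intensity differences at a few scales inside the surrounding 7*7 block are fitted on a log-log plot, and the slope (a Hurst exponent H) is converted to a surface fractal dimension D = 3 - H. The estimator details are illustrative assumptions and may differ from those used by Chen and Fox.

```python
# Per-pixel fractal-dimension map over 7x7 blocks (illustrative estimator).
import numpy as np

def local_fractal_dimension(image, block=7, scales=(1, 2, 3)):
    image = image.astype(float)
    half = block // 2
    padded = np.pad(image, half, mode="reflect")
    out = np.zeros_like(image)
    log_s = np.log(scales)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win = padded[i:i + block, j:j + block]
            diffs = [np.abs(win[:, d:] - win[:, :-d]).mean() +
                     np.abs(win[d:, :] - win[:-d, :]).mean()
                     for d in scales]
            diffs = np.maximum(diffs, 1e-9)          # avoid log(0) in flat regions
            hurst, _ = np.polyfit(log_s, np.log(diffs), 1)
            out[i, j] = 3.0 - hurst                  # fractal dimension of a surface
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32))            # stand-in for a liver image
fd_map = local_fractal_dimension(img)
print(fd_map.shape, fd_map.mean())
```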
These two techniques, although computationally rather time consuming, could provide a noninvasive alternative to 'needle biopsy', which was then the only definitive test for distinguishing among liver abnormalities such as fatty infiltration, hepatitis and cirrhosis. The traditional 'needle biopsy' is often contraindicated in patients with liver disease because of coagulation abnormalities, and is hence not as effective for identifying the malignant cells.
The work of S. Basu and K. S. Chan [5] also delivered some preliminary results of a study aimed at assessing the actual effectiveness of fractal theory in medical image analysis for texture description. Their specific goal was to use the fractal dimension to discriminate between normal and cancerous human cells. In particular, they considered four types of cells, namely breast, bronchial, ovarian and uterine cells. The 'fractional Brownian motion model' was employed to compute the fractal dimension of the different kinds of cells studied. In their experiments with real images of cells, they concluded that the range of scales (detailed mathematical descriptions not discussed here) over which cancerous cells exhibit fractal properties differs quite significantly from that of normal, healthy cells, and hence that this property can be used as a discriminatory feature to identify cancerous cells. They proposed that the method could be used for the relatively quick and accurate identification of other forms of malignant cells, which could prove invaluable to researchers and doctors in the profession.
For their specific work on the movements of an autonomous robot, it must first be understood that an autonomous robot has to move with an understanding of its environment. When a robot moves in a natural environment, it is essential to use a terrain modeling technology based on observational depth data obtained from a range finder. The words "terrain modeling" refer to the reconstruction of an elevation map of the terrain of a specific location, and the evaluation of some properties of the terrain, such as its roughness. Such a robot was developed at Carnegie Mellon University; it was capable of reconstructing a three-dimensional terrain map around it (approximately 10 m in front and 5 m above it), and was able to move quite effectively on both rocky and sandy fields. The scanning laser rangefinder of the robot was also used to construct a global map of a wide region (several hundred metres of terrain) by gathering small local maps.
The main problem of the research was constructing a three-dimensional terrain map of arbitrary resolution from a set of irregularly spaced elevation data. This was done with an interpolation method that preserves the roughness of the terrain, represented by a fractal dimension.
Many methods for generating fractal shapes have been suggested in computer graphics [2]. In these methods, however, it is not possible to reconstruct a shape which passes through or near observational elevation data. One proposed method does manage to construct a terrain pattern with a high resolution from observed elevation data; it is an expansion of the random displacement method, which generates a pattern having the property of a fractional Brownian function, derived from the 'fractional Brownian motion model'. Hence, Brownian motion finds yet another application here.
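A minimal one-dimensional sketch of the random displacement idea: starting from observed elevations at the two endpoints, midpoints are repeatedly inserted and perturbed by Gaussian amounts whose spread shrinks by a factor 2^(-H) at each level, producing a fractional-Brownian-like profile that passes through the observed data. The Hurst exponent and endpoint values are illustrative assumptions; the robot's actual method extends this to irregularly spaced two-dimensional data.

```python
# Random midpoint displacement between two observed elevation samples.
import random

def midpoint_displacement(left, right, levels=8, hurst=0.8, sigma=1.0, seed=0):
    random.seed(seed)
    profile = [left, right]
    for _ in range(levels):
        refined = []
        for a, b in zip(profile[:-1], profile[1:]):
            mid = (a + b) / 2.0 + random.gauss(0.0, sigma)  # perturbed midpoint
            refined.extend([a, mid])
        refined.append(profile[-1])
        profile = refined
        sigma *= 2 ** (-hurst)      # roughness decays with scale: fractal behaviour
    return profile

elevation = midpoint_displacement(left=10.0, right=12.5)
print(len(elevation), "interpolated elevation samples")
```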
In similar research by J. Barraquand and J. C. Latombe [7], a Monte-Carlo algorithm is used to plan the paths of robots with many degrees of freedom (the algorithm is explained mathematically in their paper). The described algorithm is capable of planning collision-free paths for robots with many degrees of freedom. It combines gradient and random motions to construct a graph of the local minima of a potential function defined in the robot's configuration space. In order to deal with large neighbourhoods, the gradient motions are generated using a random technique. The random motions are Brownian motions implemented as discretized random walks. The path-planning algorithm only keeps track of the path leading to the current configuration, and when it believes that it has reached a dead end, it performs a random backtracking to a configuration already in the path; in this way the planner escapes dead ends while the paths it produces remain collision-free.
The algorithm has been successfully implemented and tested on many examples involving different types of robots with different degrees of freedom. This is yet another application of the Brownian motion theory.
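The combination of gradient motions and Brownian escapes can be caricatured on a small grid, as in the sketch below: the robot follows a potential downhill towards the goal and, when trapped in a local minimum behind an L-shaped obstacle, performs a discretized random walk before resuming descent. The grid, the potential and the obstacle layout are illustrative assumptions, not Barraquand and Latombe's implementation.

```python
# Toy "gradient + Brownian motion" planner on a 2-D grid.
import math, random

WIDTH, HEIGHT = 20, 20
START, GOAL = (1, 1), (18, 18)
# an L-shaped barrier whose concave side faces the start: a local-minimum trap
OBSTACLES = {(x, 12) for x in range(6, 13)} | {(12, y) for y in range(6, 13)}

def potential(cell):
    # attractive potential: Euclidean distance to the goal
    return math.hypot(cell[0] - GOAL[0], cell[1] - GOAL[1])

def neighbours(cell):
    x, y = cell
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cand
            if 0 <= c[0] < WIDTH and 0 <= c[1] < HEIGHT and c not in OBSTACLES]

def plan(max_iters=20_000, walk_len=60, seed=0):
    random.seed(seed)
    path = [START]
    for _ in range(max_iters):
        current = path[-1]
        if current == GOAL:
            return path
        best = min(neighbours(current), key=potential)
        if potential(best) < potential(current):
            path.append(best)                 # gradient motion towards the goal
        else:
            for _ in range(walk_len):         # Brownian escape from the local minimum
                current = random.choice(neighbours(current))
                path.append(current)
    return None

route = plan()
print("no path found" if route is None else f"path found, {len(route)} configurations visited")
```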
Studies of the rescaled range and of fractional Brownian walks were put forward, and they introduced the possibility that extremes of floods and droughts could be fractal. In fact, an extensive study of flood gauge records at more than 1000 dams and reservoirs indicates a good correlation with fractal statistics.
The volumetric flow of a river is assumed to be a continuous function of time and is therefore treated as a Fourier time series. The characteristics of the series can be studied simply by determining its coefficients. These coefficients are associated with a normalized cumulative probability distribution function. Modelled mathematically, the relation can define either a Brownian walk or a fractional Brownian walk. This technique, known as power-law statistics, is expected to lead to a far more conservative estimate of future hazards.
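One standard way of deciding whether a record behaves as an ordinary Brownian walk or a fractional Brownian walk is the rescaled range (R/S) analysis mentioned above: the range of cumulative departures from the mean, divided by the standard deviation, grows with the window length as a power law whose exponent is the Hurst exponent H (H near 0.5 for independent fluctuations, H above 0.5 for persistent, fractional Brownian behaviour). The sketch below applies this to a synthetic flow record, which is an illustrative assumption rather than real gauge data.

```python
# Rescaled range (R/S) analysis: estimate the Hurst exponent of a record.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    rs = []
    for w in window_sizes:
        vals = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative departures from the mean
            r = dev.max() - dev.min()               # range of the departures
            s = chunk.std()
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # slope of log(R/S) against log(window length) is the Hurst exponent
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(3)
flow = 100 + 10 * rng.standard_normal(4096)   # stand-in for annual flow fluctuations
print(f"estimated Hurst exponent: {hurst_rs(flow):.2f}")
```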
If the above technique were to be carried out in large-scale projects, loss of lives and property due to natural disasters might eventually be much reduced, although the feasibility of such estimates is not firmly guaranteed.
There is always a factor of uncertainty in any economic situation, and in order to make the right investment decisions, or to choose the right business strategy, we require some form of workable hypothesis (one that takes into account uncertainty and randomness) on which to base our decisions.
Around 1900, L. Bachelier [8] first proposed that financial markets follow a 'random walk' which can be modeled by standard probability calculus. In the simplest terms, a "random walk" is essentially a Brownian motion in which the previous change in the value of a variable is unrelated to future or past changes.
Brownian motion has desirable mathematical characteristics: statistics can be estimated with great precision and probabilities can be calculated, so scientists and analysts often turn to such an independent process when faced with the analysis of a multidimensional process of unknown origin (i.e. the stock market). The Brownian motion theory and the random walk model are widely applied to the modeling of markets, and the insight that speculation can be modeled by probabilities extends from Bachelier to this day.
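A minimal sketch of this random-walk price model: the logarithm of the price performs a Brownian motion, so each period's return is an independent Gaussian draw. The drift, volatility and starting price are illustrative assumptions, not estimates from market data.

```python
# Random-walk (geometric Brownian motion) price path with assumed parameters.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 0.04, 0.20        # annual drift and volatility (assumed)
dt = 1 / 252                  # daily steps
n_days = 252

# independent Gaussian log-returns: the hallmark of the random-walk model
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_days)
prices = 100.0 * np.exp(np.cumsum(log_returns))

print(f"price after one simulated year: {prices[-1]:.2f}")
```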
In the middle of the twentieth century, work done by M. F. M. Osborne [9] showed that the logarithms of common-stock prices, and the value of money, can be regarded as an ensemble of decisions in statistical equilibrium, and that this ensemble of logarithms of prices, each varying with time, has a close analogy with the ensemble of coordinates of a large number of molecules. Using a probability distribution function and the prices of the same random stock choice at random times, he was able to derive a steady-state distribution function, which is precisely the probability distribution for a particle in Brownian motion. A similar distribution holds for the value of money, measured approximately by stock market indices. Sufficient, but not necessary, conditions to derive this distribution quantitatively are given by the conditions of trading and the Weber-Fechner law. (The Weber-Fechner law states that equal ratios of physical stimulus, for example sound frequency in vibrations/sec, correspond to equal intervals of subjective sensation, such as pitch. The value of a subjective sensation, like absolute position in physical space, is not measurable, but changes or differences in sensation are, since by experiment they can be equated and reproduced, thus fulfilling the criteria of measurability.)
A consequence of the distribution function is that the expectation value of the price itself increases, with increasing time interval 't', at a rate of 3 to 5 percent per year, with increasing fluctuation, or dispersion, of price. This secular increase has nothing to do with long-term inflation or the growth of assets in a capitalistic economy, since the expected reciprocal of price, or the number of shares purchasable per dollar in the future, increases with time in an identical fashion. Thus, it was shown in his paper that prices in the market vary in a fashion similar to molecules in Brownian motion.
Under heavy loading conditions, i.e. when many clients or users are waiting to be served, this scheduling problem can indeed be approximated by a control problem involving Brownian motion. The reason for using the Brownian model is that the scheduling policy is not based on a fixed, static queueing system: which job is to be released next, or which station is to be selected for servicing a job, depends on the job or station currently being serviced or used. The scheduling scheme is not designed in such a way that each job or machine is assigned a fixed priority status and must be placed at the front of the queueing system once its turn has come. Again, the concept of Brownian motion is put to use here.
Scheduling problems arise in Flexible Manufacturing Systems (FMS), which are networks of automated multipurpose machines connected by a computer-controlled material handling system. Each machine has an automatic tool exchange device that allows the setup time between consecutive machine operations to be almost eliminated. Ideally, these new FMS should achieve the cost efficiency of large volume manufacturing (i.e. assembly lines) and the flexibility of job shops. To achieve this efficiency and flexibility, an effective scheduling system is required that controls the flow of jobs through the FMS.
Wein [12] viewed an FMS as a network of queues and hence modelled an FMS scheduling problem mathematically as a problem of controlling the flow in a queueing network. Three types of FMS scheduling decisions were considered: sequencing, input control, and routing. The sequencing decisions consist of dynamically choosing which job queued at a particular machine should be processed next. The input decisions allow for some control over the injection of jobs into the FMS; such decisions may include when to release the next job and which job to release. In the case where an operation for a particular job may be performed at any of several different machines, the routing decisions consist of dynamically choosing at which machine the operation should be performed.
A framework was provided in which all three of these decisions can be made simultaneously. The model used to develop this framework is a Brownian network. A Brownian network approximates a multiclass queueing network with dynamic scheduling capability, if the total load imposed on each station in the queueing network is approximately equal to that station's capacity. Hence, a dynamic scheduling problem for a queueing network could be approximated by a dynamic control problem for a Brownian network.
Under heavy traffic conditions, all routing decisions are made according to the shortest expected delay routing (SDR) policy. The SDR policy routes a customer to the queue where it will incur the smallest expected delay (time in queue plus time in service). The Brownian network is used to model this SDR policy. Under stringent analysis of the SDR policy applied to more common real-life examples, where a moderate amount of discretionary routing is allowed, SDR appears to be a very robust and effective routing policy for manufacturing systems that have some degree of flexibility. Furthermore, SDR allows for the effective decomposition of the combined routing, sequencing and input control problem.
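A toy sketch of the SDR rule is given below: an arriving job is sent to the station whose current queue implies the smallest expected delay (waiting plus service). The station service rates and the crude arrival/departure loop are illustrative assumptions and do not represent Wein's Brownian network model itself.

```python
# Shortest-expected-delay routing (SDR), caricatured with two stations.
import random

class Station:
    def __init__(self, name, service_rate):
        self.name = name
        self.service_rate = service_rate      # jobs per unit time
        self.queue_length = 0

    def expected_delay(self):
        # time waiting in queue plus own service time, using mean service times
        return (self.queue_length + 1) / self.service_rate

def sdr_route(stations):
    chosen = min(stations, key=lambda s: s.expected_delay())
    chosen.queue_length += 1
    return chosen.name

stations = [Station("M1", service_rate=1.0), Station("M2", service_rate=0.5)]
random.seed(0)
for job in range(10):
    # occasionally a job finishes somewhere, shrinking that queue
    busy = [s for s in stations if s.queue_length > 0]
    if busy and random.random() < 0.5:
        random.choice(busy).queue_length -= 1
    print(f"job {job} -> {sdr_route(stations)}")
```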
The application of Brownian theory to queueing theory is an active and ongoing research topic. The optimum scheduling policy is yet to be formulated, and this applies not only in the field of manufacturing but also in communication networks.
There had been previous investigations into optimal switching problems, but none of them provided a rigorous mathematical proof that an optimal starting and stopping strategy exists. The work done by Brekke and Oksendal modeled the price of a resource as one following a geometric Brownian motion, and they subsequently proved explicitly that an optimal starting and stopping strategy does exist for the particular resource extraction problem.
As an example of their research, suppose the costs of opening, running and closing down a field for resource extraction are known, and the price of the resource under consideration is varying according to a geometric Brownian motion. When would be the optimal time to open the field and to close it again? It would be reasonable to say that if the field is open, it may be a good strategy to continue extraction for a while even if the price has gone below running costs, because there may be a chance that prices could go up again. Furthermore, opening and closing the field is a costly process. On the other hand, even with such an optimistic point of view there is clearly a limit as to how low the prices can go before closing is the optimal strategy. Thus, using the mathematics of Brownian motion, they were able to prove explicitly that an optimal solution exists for the problem, and also for other similar situations.
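The flavour of such a problem can be conveyed by simulation, as in the sketch below: the resource price follows a geometric Brownian motion and a simple threshold policy opens the field when the price rises above one level and closes it when the price falls below a lower one. All numbers (costs, thresholds, price dynamics) are illustrative assumptions; Brekke and Oksendal's analysis is mathematical rather than simulation-based.

```python
# Monte Carlo evaluation of an assumed open/close threshold policy when the
# resource price follows a geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, dt, n_steps = 0.02, 0.30, 1 / 52, 52 * 10     # weekly steps, 10 years
open_cost, close_cost, running_cost = 5.0, 2.0, 1.0
open_at, close_at = 1.4, 0.8                             # price thresholds (assumed policy)

def simulate_profit():
    price, field_open, profit = 1.0, False, 0.0
    for _ in range(n_steps):
        price *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
        if not field_open and price > open_at:
            field_open, profit = True, profit - open_cost      # pay to open the field
        elif field_open and price < close_at:
            field_open, profit = False, profit - close_cost    # pay to close it down
        if field_open:
            profit += (price - running_cost) * dt              # extraction revenue minus running cost
    return profit

profits = [simulate_profit() for _ in range(500)]
print(f"mean profit of this threshold policy: {np.mean(profits):.2f}")
```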
In the decision process examined, performance is observed and rated over time and an observed score is compared with a predetermined standard. Action is taken when a threshold criterion is reached. For example, an employer promotes an employee when his observed average performance level is above 8 on a scale of 1 to 10. In the investigation, promotion decisions are determined by assessment of the quality of the match between the individual and the job, random factors involved in assessing the match, and the employer's personnel policy based on the costs of making errors in judgement. When applied to data, the estimated parameter values of the model indicate the relative importance of each of these factors in determining the time to promotion.
The model developed uses a Brownian mathematical formulation for a decision process used for sampling performance. One advantage of the modeling framework developed is that the resulting probability distribution functions can be expressed analytically rather than by simulation. The threshold process is formulated in continuous state space and continuous time, which makes it possible to model a cumulative record of performance as a Brownian motion process.
With respect to careers, the model reflects some ordinary notions about why people change jobs. The explanation offered supposes that career decisions are based on an evaluative record of the quality of the job match. Three factors are key in determining job mobility decisions: the assessment of the quality of the job match; the noise, or random factors, that influence evaluation; and the decision maker's skepticism about his estimate of the quality (of the job).
The model also assumes continuous observation of behaviour (of employees) and that the only route for leaving a job is by promotion. This suggests that the important mechanisms in the process are the basic evaluation procedure (a rating that includes a random component, in the spirit of Brownian motion) and the decision rule (promote when an estimated average reaches a criterion level). The model was able to provide substantive qualitative results and is hence of good use to the 'real' world in decision-making policies.
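A small sketch of the threshold mechanism described above: the cumulative performance record is simulated as a Brownian motion with drift (the drift standing for the quality of the job match, the noise for random evaluation factors), and promotion occurs the first time the running average reaches the criterion level after a minimum review period. All parameter values, and the minimum review period itself, are illustrative assumptions.

```python
# Promotion as a first passage of a drifting Brownian performance record
# past a criterion level.
import numpy as np

def time_to_promotion(match_quality=7.5, noise=2.0, criterion=8.0,
                      dt=1 / 52, min_years=1.0, max_years=20, seed=None):
    rng = np.random.default_rng(seed)
    cumulative, elapsed = 0.0, 0.0
    while elapsed < max_years:
        elapsed += dt
        # Brownian motion with drift: match quality plus random evaluation noise
        cumulative += match_quality * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if elapsed >= min_years and cumulative / elapsed >= criterion:
            return elapsed                      # running average hit the threshold
    return None                                 # never promoted within max_years

times = [t for t in (time_to_promotion(seed=i) for i in range(1000)) if t is not None]
print(f"promoted in {len(times)} of 1000 simulated careers; "
      f"median time {np.median(times):.1f} years")
```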
The deposition of larger particles (of diameter greater than 0.5 micrometre) depends mainly on particle inertia, i.e. low-speed particles are more likely to be filtered. Nevertheless, owing to the large surface area of the head passageways, the complex flow patterns (passage geometry) and the presence of strong Brownian motion, diffusion becomes responsible for the deposition efficiency of ultrafine particles in the nose and mouth. The filtration efficiency was found to be much higher for smaller aerosol particles. This explains why ultrafine pollutants in the air can be filtered out more effectively in the head passageways and cause less irritation.
The Brownian motion theory has come a long way since its humble beginnings in the nineteenth century, and there now exists a large number of applications that have evolved from it and countless others that revolve around it. This theory covers such a vast number of interesting aspects of life without our being aware of its role. The examples we have cited are a mere speck of the research that has been done to date. With the random and often unpredictable nature of events that take place in this world of ours, it is no wonder that researchers have yet to find the perfect solutions to their unending problems. Hence, Brownian motion will remain a strong research area in the coming days, and it is certainly not going to become obsolete in the scientific world, where new technologies are constantly being developed to replace the old ones.