How to cite: M. Jagadesh Kumar, “Telling Lies to Describe Truth: Do we Emphasize the Importance of ‘The Art of Approximations’ to the Students?”, *IETE Technical Review*, Vol. 31 (1), pp. 115–117, January–February 2014.

Every day, I am faced with the dilemma of explaining some complex phenomenon or the other to my students. In the given time that I spend with my students in class, how do I make them understand a difficult topic? To realize my goal, I tell “lies to students”, a phrase I coined after reading fantasy writer Terry Pratchett’s “lies to children”. A lie, according to Pratchett, is “a statement that is false, but which nevertheless leads the child’s mind towards a more accurate explanation”. We use “lies” as tools if we aim to be effective teachers who convey a complex subject in understandable language. Like all teachers, I am honest too. Therefore, I actually say something like this: “Look, I am going to tell this lie to make you understand better”. If my students are to shape into good engineers or scientists tomorrow, they need to master this “art of telling lies”, because everything in science is an approximation to reality, or “a lie”. If you disagree with what I am saying, perhaps you will find comfort in what Richard Levins said: “Our truth is the intersection of independent lies”. He further says, “Even the most flexible models have artificial assumptions. There is always room for doubt as to whether a result depends on the essentials of a model or on the details of the simplifying assumptions.”

Approximations are an essential part of scientific pursuits. When I use approximations to explain a concept, my students are often perplexed. They think I am trivializing a profound concept. Yet after making approximations, often unrealistic or unphysical, a lengthy and complex equation reduces to a simple form and gives us greater insight into the working of a system. I can then see a smile on the faces of the students. Bertrand Russell said in “The Scientific Outlook” (1931), “Although this may seem a paradox, all exact science is dominated by the idea of approximation. When a man tells you that he knows the exact truth about anything, you are safe in inferring that he is an inexact man.”

We habitually use approximations during our lectures. We often do not, however, emphatically underline the fact that many of the laws or theories we study are actually mere approximations. We tend to pretend during lectures that the laws we are presenting are truths. This is because, as humans, we all suffer from a syndrome called *confirmation bias*, which drives us to seek ideas that fit our current views rather than think critically and contradict what we hold as truth. Take, for example, Ohm’s law or Henry’s law. How often did we tell our students that they are in fact not “laws” at all and that they are simply approximations to experimental observations? When a student asked me to add my perspective on approximations, I almost withdrew. I did not know how to approach this subject of approximations, which seemed too complex to me. Approximations do hold the key to the evolution of modern knowledge. But how can I make such an assertion to my students? We seemingly want to use approximations everywhere but not think about how approximations have influenced the thinking of generations of scientists.

The basic underlying principles of any complicated aspect of nature can be obtained only if we make appropriate approximations. The evolution of science is often driven by measurement uncertainties, approximations, estimates, unphysical assumptions and often well-thought-out speculations. Einstein is clear about it when he said, “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.” Approximations help us in bridging the gap between what is not certain and what is reality. As practitioners of science, we know their importance. But are we sensitizing our students enough about it? I think it is not adequate just to “derive” a formula or a theory; it is also essential to discuss the underlying philosophy of the approximations we make. Otherwise, students may often think that the approximations do not make “sense”. They may not even appreciate the importance of acquiring the ability to make approximations.

When we are faced with a complex situation, the solution looks intractable. We need to break down the complexity into a simpler form. How can we do that without approximations? To make this point clear, let us look at some outstanding “approximations” that influenced the future of science.

For a hardened science practitioner, making approximations may sound unremarkable, but not to the student. Simply knowing or being told that pi has an approximate value is of little relevance unless the student is made to appreciate how the idea of finding an approximation to pi actually led to new knowledge. Archimedes’s persistence in using the *method of exhaustion* to find a better approximation to pi ultimately paved the way to a new area of mathematics called integral calculus. But often we fail to emphasize this connection, denying the student a chance to appreciate why approximations are important.
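Archimedes’s idea can be sketched in a few lines of code (a modern reconstruction, not his notation): start from a regular hexagon inscribed in a unit circle and repeatedly double the number of sides; half the perimeter then approximates pi from below.

```python
import math

# Sketch of the method of exhaustion for pi (a modern reconstruction):
# for an n-sided regular polygon inscribed in a unit circle with side
# length s, the 2n-sided polygon has side length sqrt(2 - sqrt(4 - s^2)).
def pi_by_exhaustion(doublings):
    n, s = 6, 1.0                 # start from a hexagon: six sides of length 1
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
    return n * s / 2.0            # half the perimeter approximates pi

for k in (0, 1, 5, 10):
    print(6 * 2 ** k, "sides:", pi_by_exhaustion(k))
```

Each doubling improves the accuracy roughly fourfold; Archimedes himself stopped at a 96-sided polygon, which this sketch reaches after four doublings.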

Since the time Copernicus pointed out that the Sun is the center of the solar system, everyone believed that planets moved in circular orbits at a constant speed. Kepler did not like this idea, since it did not fit the best available data of that time for the orbit of the planet Mars. After many frustrating efforts, Kepler found that elongated circles, or ellipses, fitted the data rather well. He knew that an ellipse approximates a circle when its foci are brought closer together. He simply kept the Sun at one of the foci of the ellipse. This led to Kepler’s first law. He, however, was not satisfied with the results, leading him to postulate that the speed of the planet changed as it approached the Sun. This became his second law. His effort was an unthinkable scientific feat. Kepler’s work, published in the year 1609, is a historical example of how one can use limited data but make inspired guesses using appropriate approximations.

We need to give students an appreciation of the unity of approximation and innovation. It was Edward Lorenz, an MIT professor, who proved that the weather cannot be predicted with any reasonable accuracy beyond about two weeks. His mathematical model, consisting of just three variables and three equations, indicated that there are formal predictability limits for certain deterministic systems. Lorenz found this accidentally. This brilliant insight, that we have to accept approximate predictions when dealing with large dynamic systems such as the atmosphere, led to a new scientific field called chaos theory. After relativity and quantum mechanics, chaos theory is considered to be the third scientific revolution of the 20th century. Lorenz will be remembered for making us aware that weather forecasting will always remain approximate. The Kyoto Prize committee noted that Lorenz “has brought about one of the most dramatic changes in mankind’s view of nature since Sir Isaac Newton”.

Factorials are used very commonly in algebra, calculus, probability theory and number theory. Using successive multiplication, one can calculate the factorial of any non-negative integer *n*. But from a computing point of view, that is an utterly inefficient way of finding the factorial if *n* is large. Large factorials can easily be computed using approximations such as Stirling’s approximation. If you are not a mathematician, you may not be aware that Srinivasa Ramanujan’s approximation to factorial *n* is even more accurate than Stirling’s approximation as the value of *n* increases. Ramanujan is well known for many other approximations in number theory, and his approach to obtaining these approximations is unparalleled in the history of mathematics since we began our efforts to approximate the value of pi about 4000 years ago.
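Both formulas are easy to try out. The sketch below compares Stirling’s approximation, sqrt(2·pi·n)·(n/e)^n, with Ramanujan’s refinement, sqrt(pi)·(n/e)^n·(8n³ + 4n² + n + 1/30)^(1/6), against the exact factorial.

```python
import math

# Two closed-form approximations to n!, compared with the exact value.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def ramanujan(n):
    # Ramanujan's refinement of Stirling's formula
    return (math.sqrt(math.pi) * (n / math.e) ** n
            * (8 * n**3 + 4 * n**2 + n + 1 / 30) ** (1 / 6))

for n in (5, 10, 20):
    exact = math.factorial(n)
    print(n, stirling(n) / exact, ramanujan(n) / exact)
```

Already at n = 10, Stirling’s formula is off by under one percent, while Ramanujan’s is off by only a few parts in a million.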

Sometimes scientists abandon well-respected old ideas and theories only to find themselves amidst new revelations. Perturbation theory is a great mathematical approach which helps us find approximate solutions to problems for which it may be impossible to find exact solutions. This theory has its origins in a concept called ‘linear approximation’, in use in physics since the 17th century. Hooke’s law is a famous example of linear approximation. Let me explain it in simple words.

Hooke’s law, discovered by Robert Hooke in 1660, states that “the force *F* a spring experiences is proportional to the distance *x* it is deformed from its natural length *L*”. Hooke’s law is expressed as *F* = -k*x*, where k is a constant. There cannot be a simpler equation to describe the elastic behavior of a spring. However, it is clear that the spring cannot be compressed to zero thickness, nor can it be stretched to infinite lengths, where it will break. Hooke’s law will simply fail at these extreme lengths. For a limited range of *x*, as long as the deformation is “small”, we see that Hooke’s law is a linear approximation and works quite effectively. Ohm’s law and the equations for thermal expansion or pendulum motion are all linear approximations based on experimental observations. Without a deeper appreciation of linear approximations, Rayleigh and Schrödinger could not have invented perturbation theory, without which there is no possibility of finding simple solutions to describe complex quantum systems.
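The same “small deformation” caveat appears in the pendulum, where sin(theta) ≈ theta is the linear approximation. A quick numerical check (my own illustration, not from the article) makes the range of validity concrete:

```python
import math

# The linear (small-angle) approximation sin(theta) ~ theta: excellent
# for small deflections, increasingly wrong for large ones -- the same
# pattern as Hooke's law far from the spring's natural length.
for degrees in (1, 5, 15, 45, 90):
    theta = math.radians(degrees)
    rel_error = (theta - math.sin(theta)) / math.sin(theta)
    print(f"{degrees:3d} deg: relative error {rel_error:7.3%}")
```

At 5 degrees the linear approximation is off by about a tenth of a percent; at 90 degrees it is off by more than half.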

When we want to describe reality, space and time remain our fundamental conceptual aids. However, a recent approximation developed by scientists is expected to break this notion. Particles, spread all over the universe, interact constantly with each other, and these interactions are the most basic events intrinsic to Nature. The laws that govern the particle interactions are described by quantum field theory. The complexity of these equations, running into several thousands of terms, needed to capture the nature of elementary particles and their interactions is overwhelming. But all that is going to change dramatically with the discovery of a jewel-like geometric object called the “amplituhedron” by Nima Arkani-Hamed of the Institute for Advanced Study, Princeton, N.J. By finding the volume of this amplituhedron, Arkani-Hamed’s team has demonstrated that the complex equations of quantum field theory can be reduced to an equivalent simple expression. With this advancement, the researchers claim, one can make quantum field theory computations even on the back of an envelope, without the need for advanced computers.

Mathematical description of gravity at the quantum scale is an uphill task due to the possibility of sudden surprises involving inexplicable paradoxes and infinities. The physics world is now abuzz with the likelihood of developing a simple unified theory of quantum gravity (to combine the large- and small-scale understanding of the universe) using similar geometrical approximations. This area of study is evolving, and not all scientists are comfortable with the idea that we can throw away the notions of time and space and use only simple geometrical approximations. However, even in the 21st century, approximations continue to be a “hot” topic and draw fiery debates among scientists because they could shatter our rigid, deep-rooted beliefs about reality and present mother Nature in a more comprehensible and simple form.

Let us rejoice in the centuries-old practice of “approximations”. Let us exhort our students to look for opportunities to approximate. If we can underline “approximations” as an important tool of scientific pursuit to be mastered, and make students pay serious attention to them rather than simply skim over them, we shall have accomplished a valuable and significant task as professors. Einstein once said about God, “I want to know His thoughts. The rest is just details.” For students it would be more fascinating to know how scientists worked toward developing a scientific idea using approximations than to know the idea itself. But for this to happen, we have to get away from being routine teachers and instead work harder to become effective teachers, a species fast becoming rarer than hen’s teeth.

Approximations are the lifeline to solve the most complicated aspects of nature. Some professors do emphasize the philosophical and historical aspects of approximations in their lectures. Why not every professor? Who knows if one of our students enthused by our emphasis on the importance of approximations would turn out to be a future Ramanujan or Kepler or Schrodinger? Don’t you agree?

PS: “Telling lies to describe truth” in my title is used only to catch your attention. I certainly do not mean that approximations are “lies”.

Very well articulated. It always confused me as a youngster how Nature conjured up such simple formulae for all the physics and chemistry that we did. What was the divine connection between maths and the physical sciences? It was much later that I realised that we were being “fooled”, and I actually hated my teachers for not being transparent. Now, I am extra skeptical whenever I see long, complicated calculations to explain anything, especially from astrophysicists. Not only are the explanations dubious, I also think that many of the observations are imaginations.

Very well written, Jagadesh. In fact, the explanation of limitations should start at the school stage itself, when the students get their first exposure to these laws. I have myself been talking about this in lectures to science teachers in schools. I always give the example of Boyle’s law, PV = constant, provided we are using it for a fixed mass of gas and the temperature is constant. Then I move to a bigger generalization, the ideal gas equation, PV = nRT. Then I discuss that even this is valid only for gases, and that too ideal gases, etc.
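The layering the comment describes can be made explicit in a few lines (a sketch; the function name and the numbers are illustrative only): Boyle’s law is what the ideal-gas law reduces to when the amount of gas n and the temperature T are held fixed.

```python
R = 8.314  # molar gas constant, J/(mol*K)

def pressure_ideal(n_mol, temp_k, vol_m3):
    # Ideal-gas law P = nRT/V -- itself an approximation, valid only
    # for (near-)ideal gases.
    return n_mol * R * temp_k / vol_m3

# Boyle's law as the fixed-n, fixed-T special case: P*V stays constant.
p1 = pressure_ideal(1.0, 300.0, 0.010)
p2 = pressure_ideal(1.0, 300.0, 0.020)
print(p1 * 0.010, p2 * 0.020)  # both equal n*R*T
```

Halving the pressure by doubling the volume leaves the product P·V untouched, exactly as Boyle observed, so long as the larger approximation (ideal-gas behaviour) holds.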

In one of P. G. Wodehouse’s books, a character declares loftily, “I have always found truth an excellent thing to deviate from”. A colleague of mine jokingly told me that all physical sciences and engineering are merely curve-fitting exercises. Approximations therefore are “lies” indeed, unless their deviations from the truth can be bounded somehow.

Approximations are a core aspect of understanding nature – as it cannot be understood perfectly!

The question is, “how much is the error involved?”

If we can convey that, at least heuristically, then approximation methods are as legitimate as, and sometimes more legitimate than, the exact ones – an exact one is impossible, since every model is itself an approximation of reality!

Really enjoyed it — could correlate it very well with the “approximations” that I use to answer many questions that my 6-year-old asks.

Cheers

==k

Mathematics is the only field where things are precise – crisp and not fuzzy. Nothing is an approximation. Approximation can only occur in finding numerical values to equations that cannot be solved using standard methods.

The only “approximation” used in mathematics is the ‘Continuum Hypothesis’. Gödel’s theory says that the logical principles of mathematics hold both with and without this principle. Logicians are very clear when they derive a certain theory with CH or without CH.

As expected, this is one more excellent exposition on a subject close to my heart. “Approximation of Facts”, “Economy with Truth”, “Metaphor” and even “myths” are similes for lies. In fact, “mythology” derives from “myth”, and thus “lies” assume moral sanction, and one need not be apologetic about the liberal use of “lies”. In the ultimate analysis, “Truth” is what the victor proclaims of the “Vanquished”.

I enjoyed your article very much. I think in the physical sciences, a hierarchy of approximations often quantifies the relative importance of different aspects of a phenomenon. I think the contrast between “truth” and “lies” may be too extreme – it is more a matter of successive approximations to the truth, or rather to the exact solution of a mathematical equation that expresses what we take to be a “law of nature”. It is important when teaching to indicate the level of approximation and when one might expect it to break down.

Thanks for a lovely article. In technology (technology is science-based, as opposed to engineering, which may be more judgmental and experience-based) approximations are well accepted. The objective is to make a (man-made) system work safely and satisfactorily. As long as the assumptions do not make the system compromise the required reliability (it may have sufficient redundancies), approximations are perfectly O.K. For example, consider a building, which is generally built up of members joined together. The real fixity of a joint is extremely complex to account for in mathematical treatment. So you assume (approximate) a given type of joint, say fixed, pinned, hinged, etc., so that it becomes amenable to mathematical treatment with the current knowledge of mechanics (classical mechanics and solid mechanics at the macro level), and you can estimate the forces the connected members are going to experience against natural forces. So long as your estimated force is adequately less (with a partial factor of safety) than the actual force (with 95% statistical confidence) the member is going to experience during its intended design life, your approximations are all well accepted. In fact, you have to make assumptions leading to approximations if you are dealing with natural systems such as soils, rivers, earthquakes, etc.

Working with natural fibres for the last decade, I very well appreciate the article. I guess approximations are a curious way of understanding and appreciating nature. Nothing is really concrete – not even our understanding (which so unerringly seems convincing at that juncture). Nature delicately weaves her intricacies – our best comprehensions are perhaps somewhere close to the “simplest approximations” of the much more complicated fabric which exists. To give an example, though we have somewhat understood the structure of the cotton fibre, we really have, till date, no clue as to how this fibre is formed from the cellulose polymer. Hypotheses are proposed, which are but “oversimplified assumptions”. On more complicated systems like wool, scientists have not even dared such assumptions.

That’s a very important issue in science and engineering and should be given more priority in the courses we teach. A related problem is that of ‘ordering’, which comes up in the perturbation theory that is used to solve nonlinear differential equations. We often do a Taylor series expansion of complex mathematical functions and ‘approximately’ keep the first few terms to make life easy. But if there is more than one function (or one function of several variables) that needs to be Taylor expanded, it may not be so obvious which of these nth-order terms is the smallest. This is where ‘ordering’ comes in. A different choice of ordering can lead to a very different solution, which can be completely wrong. What’s more annoying is that the only way to know that a certain choice of ordering is wrong is to either numerically solve the ODE or to find a particular example which can be exactly solved and then compare.
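The truncation step itself is easy to demonstrate. The sketch below (a generic illustration, not the commenter’s specific problem) truncates the Taylor series of exp(x) at successive orders and shows how the error shrinks; the ‘ordering’ difficulty arises when several such expansions must be truncated consistently.

```python
import math

# Truncated Taylor series for exp(x): sum of x^k / k! for k = 0..order.
def exp_taylor(x, order):
    return sum(x ** k / math.factorial(k) for k in range(order + 1))

x = 0.5
for order in (1, 2, 4, 8):
    err = abs(exp_taylor(x, order) - math.exp(x))
    print(f"order {order}: error {err:.2e}")
```

For |x| < 1 each extra term reduces the error by roughly a factor of x/(order+1), which is what justifies keeping only “the first few terms” — provided one has identified the right small parameter.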

As it has been pointed out by several faculty members earlier, we always teach students ‘what to think’ but hardly teach them ‘how to think’. I hope that when the curriculum review happens next time, we give more importance to these issues and come up with some innovative ways of teaching.

Thanks,

Kushal.

In Prof. Jagadesh Kumar’s article, the concept of ‘approximation’ is used in two senses: As numerical approximations to the exact solution of an equation and as scientific theories as being mere approximations to the real physical phenomena. Genuine questions about the realism of scientific theories can be raised but branding them as LIES is not justified. One does not board a LIE when taking an intercontinental flight. Broadly speaking, in my opinion, this discussion is about ‘Philosophy/Methodology of Science’ by other means. All of us as researchers and teachers in engineering, science and mathematics need some working philosophy. In my experience, those who ridicule philosophy as irrelevant defend most vociferously their stand on certain philosophical issues having a bearing on their research field. Some of us subject our nice research scholars and colleagues to lengthy monologues repeatedly about these issues and warn them about the ‘enemies’ having opposite view. This is the first time in decades that I have come across any public discussion about this issue. Gratitude is due on this score to Prof. Jagadesh Kumar for initiating this discussion.

All theories in engineering and physics are mathematical. Theorems are deduced by mathematical deduction from explicitly-stated hypotheses about relations between concepts. However, proposal of new concepts and hypotheses depends upon the scientific creativity of the theorist. In this not-entirely-rational creative phase, several factors like experience, knowledge about existing theories and experimental data, induction, mathematical competence, personal taste and style, etc., have a role to play. Questions about realism, empirical connection and applicability of scientific theories naturally arise. To some of us the concept ‘exact’ implies consistency of mathematical deduction without approximations, while others mean by it the correspondence with real phenomenon being modelled. For some, even an equation obtained by curve-fitting of experimental data is a theory.

Diversity in opinion about these issues is caused by our varied experience, the level of development of our research fields, the motivating literature on this subject, etc. My views are coloured by my research field (constitutive modelling, nonlinear dynamics and stability) and the following literature: Truesdell’s An Idiot’s Fugitive Essays on Science, Karl Popper’s hypothetico-deductive method of science, Wigner’s essay on the inexplicable applicability of mathematics in physics, Hadamard’s Essay on the Psychology of Invention in the Mathematical Field, Ramanujan, Einstein, and other sundry readings on the history and philosophy of science, the history of continuum mechanics, elasticity, structural theory, etc.

Seismic behaviour of civil structures is affected by many factors and the seismic design is code-based. As of now, research investigations follow a design-oriented all-inclusive empirical-computational methodology. In some areas, there is some analytical clarity, while in others sheer pragmatic eclecticism reigns supreme. All kinds of theories with many empirical coefficients are used to obtain some response without bothering about the uniqueness of response of these nonlinear hysteretic mechanical systems. Sheer volume of possible mundane consultancy definitely has a disorienting effect on research trends. In such a scenario, who is worried about the multiplicity of competing theories, admissibility of some concepts or methods, etc.?

Of course, somebody with a formal training in Philosophy of Science could brand this discussion entirely uninformed. So be it.

To emphasise, we should be engaged in a discussion of different viewpoints on the relation between the mathematical, physical and engineering sciences, and the experimental observations and real phenomena. After all, experiments are also conducted under the guidance of the theory being validated.

Any scientific theory is a model. Any model is based on a set of postulates. Postulates are based on the observation of natural phenomena and their perception by the human mind. Since we are part of the system called Nature, our perception cannot be complete because a part of a system cannot describe the system fully. Hence any description of nature is and will always be approximate with varying degrees of approximation.

Therefore, there is nothing like telling lies for the description of nature. It is simply telling the truth of our inability to describe nature in its full glory.

Regards.

Ajit Kumar

In this jovial ongoing discussion on modelling and approximations, I would like to share an old anecdote. I thought of sharing it because it deals with “Modelling and Approximations”.

3 people were once asked to prove the following theorem.

” All odd numbers are prime numbers”

The way they proved it

Mathematician’s Proof:

3 is a prime number

5 is a prime number

7 is also a prime number

The result follows by Induction.

Hence proved

Physicist’s Proof:

3 is prime number

5 is prime number

7 is prime number

9 – experimental error

11 is prime number

13 is prime number

Hence proved

.

.

.

.

.

.

Of course, the engineer has the last laugh (look at the way engineers work!)

Engineer’s Proof

3 is prime number

5 is prime number

7 is prime number

9 is prime number

11 is prime number

etc..

Hence proved

Moral:

The model of anything you derive is based on your attitude towards approximations.

The above subject topic initiated by Jagadesh has really taken a new dimension and I wish to add something in the same vein.

*************

Narration 1:

Everything cannot be a subject of research: Visitors surrounded a humourist at his home in England as he was drawing his last breath. Thinking that he might now fearlessly reply to all questions, a journalist among those many visitors put this question to the humourist:

Why have you never made fun of the King?

The others standing around him thought the question was irrelevant, as he was dying and had no time to write anything.

The master laughter-blaster left his last words

“ King is no subject”.

Murmuring this, now a famous four-word pun instead of a four-lettered one, the KING of humour left his mundane life (a pun is a kind of spin; if one understands it, it becomes fun).

*****************

Narration 2:

Logic is most important: Let me take you to a discussion between an aspirant and a priest in the lawns of a seminary surrounded by farms, where lots of cows and other farm animals are grazing and birds are flying all over the sky, producing a beautiful sight. The discussion is about God and His creations.

Aspirant to Priest: A bird eats so little and still it flies long distances – why – in search of food. Whereas a cow needs to eat a lot to fill its belly and searches for fodder all day long, but that too in a small farm! Tell me, priest, why did God give the bird wings to fly and not the cow?

Priest to Aspirant: God is the creator and all on the earth are its creations to make the world most beautiful, and supported his argument with possible examples and explanations.

The priest tried in all earnest to explain the decision of the Almighty, but the aspirant insisted that he was not convinced and maintained that “cows should have wings and not birds”. And the discussion continued in the open, which made the priest pensive; he stood with bowed head, looking down in search of newer convincing answers based on reasoning. Right at that moment something happened.

A bird flying overhead eased itself, and the dropping fell on the shoulder of the aspirant, who suddenly became silent on noticing it. The priest raised his head to find the aspirant a bit vexed and confused. At that very moment, the priest, in a very deep voice, asked the aspirant,

“You still want the cow to have wings and fly?”

******************

Narration 3

Observations are necessary: I may also recount the story of the “soliton” in this connection, which was first observed but finally gave birth to an elegant theory used across many disciplines in science and engineering. John Scott Russell first observed the soliton in the Union Canal as a rounded, smooth, and well-defined heap of water moving down the stream. On seeing it, Russell chased the wave on horseback along the canal and found that it did not dissipate but continued travelling downstream at a speed of 13–14 km per hour. He finally observed how the wave dissipated when the canal started winding, and ultimately its disappearance. He later reproduced this “wave of translation” in a wave tank. It is now well known that the soliton is a wave packet, and solitons are solutions of weakly non-linear partial differential equations. Different-sized solitons interact and move at different speeds; on interaction there is no change in their shape, but their phase speed is a bit altered. Jupiter’s red spot has been explained with this theory.

Replicating an observed event in a laboratory produces the most profound understanding of a particular phenomenon. One can then formulate the governing equations that explain the valuable collection of observations from experiments.

******************

Narration 4

Mathematics is the soul of science and engineering: The matter for this narration has been taken from the book “Science and Method” (1908) by Henri Poincaré.

During the Middle Ages, Europeans believed strongly in magic. For all kinds of disasters or natural calamities that happened, people far cleverer than most gave magical explanations or attributed them to the visits of monsters from nearby Africa or unknown destinations: be it a cyclone or even an earthquake. Their influence was so strong that almost all Europeans believed in the power of magic, but at the same time they also thought it sinful. Possibly it was the Dark Age for humanity, with so much belief in the power of magic. A similar adverse period for science could be related to the trial of Galileo, the 16th–17th century mathematician, physicist, astronomer and philosopher.

Nevertheless, science continued to evolve in Europe. Let us understand how magical explanations, so dominant in the 11th and 12th centuries, were eradicated from Europe. Awakenings happen at moments when darkness reigns, and a single enlightened soul takes charge to change the perceptions and beliefs of the people, creating a new paradigm.

One such savant, the philosopher Mach, lived in Vienna, uncomfortable with people’s blind faith in magic, and he vowed to create a spark of light in the challenging dark woods of ignorance. The celebrated Viennese philosopher wanted to rid the common people of this sinful thinking, defeating the people far cleverer than most. So he started his “conventions”. The sole aim of the philosopher Mach was to propagate science, and belief in scientific rather than magical explanations of natural events. To achieve his objective, he used a simple fact – the power of the lever – to build the arsenal to kill the biggest monster of ignorance. The power of the lever was already exploited by people in their daily life, and Archimedes even claimed, “Give me a place to stand, and a lever long enough, and I will move the world”, a couple of centuries before the birth of Lord Christ.

Why did the philosopher Mach choose the example of the lever? Because he realised that common people clearly understand the effort that goes into manipulating a lever and the enormous return they get. So he preached at his conventions:

“Science economizes Thought much like Lever economizes Labour.”

This made an enormous impact on the people, and they flocked to his every convention, seeking Tamaso ma jyotirgamaya − take me from darkness to enlightenment. Knowledge and belief in scientific explanations began to surge, and a new paradigm was created. Magic, though, was not completely eradicated, as it is well known that even Newton studied alchemy. Later, in the 17th century, Galileo faced the wrath of the Church and was summoned for trial; science progressed in Europe notwithstanding.

Henri Poincaré (1854–1912) was a mathematician, physicist, engineer, geometer, philosopher, and a man of letters. Such an individual is indeed rare. Poincaré achieved prominence as a mathematician; he worked in astronomy, light, electricity, mechanics, thermodynamics, and fluid mechanics. He studied at the École Polytechnique (a most reputed engineering institute), so he had a degree in engineering. He is also credited, along with Einstein and Lorentz, as a co-discoverer of the theory of special relativity. Poincaré was gifted with such superb literary prowess that he used it to advantage in describing the meaning and importance of science and mathematics to the general public.

To the notable quote of the philosopher Mach that people of his time heard at his conventions, Poincaré added just three more words to make the sentence much more reflective of the union of two different universes, viz., those of science and mathematics:

“Science economizes Thought much like Lever economizes Labour.

Mathematics economizes Science.”

Though this sentence is deep in meaning, it is easy for everyone to feel and understand the significance of mathematics, which has contributed so much to the advances of science and engineering.

OP Sharma

This is an interesting discussion thread although the initial posting by Jagadesh was about the significance or awareness of “model/approximation” in the process of teaching.

The seriousness or importance of a certain discipline is often related to the level of rigour and scrutiny that its current models can withstand, and to how honestly a community goes about verifying them. As long as we are honest there is no issue, since it is a natural process of evolution and refinement. Computational verification has added a new dimension to this process, but I feel that it is probably being overused (abused?).

Even mathematics went through a period of reinforcing itself in the early part of the 20th century, when its logical foundations were laid down explicitly.

We will only be as good as the models are and let the philosophers devise theories of the models powered by different brands of spirits and spirituality.

regards,

Sandeep

There is an interesting lecture on the art of approximations by MIT researcher Stephen Hou. Details are given below.

The Art of Approximation in Science and Engineering: How to Whip Out Answers Quickly

http://blossoms.mit.edu/videos/lessons/art_approximation_science_and_engineering_how_whip_out_answers_quickly

Stephen M. Hou

Department of Electrical Engineering and Computer Science

Massachusetts Institute of Technology

Cambridge, Massachusetts 02139 USA

There is a selected-topics physics course at the Massachusetts Institute of Technology. Its title is “Lies and Damn Lies: The Art of Approximation in Science”. This course is all about how one can enjoy science by mastering skillful lying, i.e., the art of approximation. More details are available at http://www.inference.phy.cam.ac.uk/sanjoy/mit/

Well done Professor. Makes a lot of things much clearer.