Call drops in mobile wireless networks: Calling your attention

(To appear in Sept-Oct 2015 issue of IETE Technical Review)

After intense debates over net neutrality, call drops in mobile networks have become the new burning topic in public discourse. Whether you called another user or someone called you on your mobile phone, if the call is interrupted before either party decides to terminate it, it is known as a call drop. A dropped call directly affects the quality of service (QoS) that a mobile wireless network is expected to maintain. Call drops are more worrisome than blocked calls, since a dropped call leaves no request in the queue to re-establish the connection. The call drop rate (CDR) is, therefore, used as one of the essential figures of merit for measuring the quality of service in mobile wireless networks.
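As an illustrative sketch (the function name and figures here are mine, not TRAI's published methodology), the CDR is simply the percentage of successfully established calls that terminate abnormally:

```python
def call_drop_rate(dropped_calls, established_calls):
    """Call drop rate (CDR) as a percentage of successfully established calls."""
    if established_calls == 0:
        return 0.0
    return 100.0 * dropped_calls / established_calls

# 150 dropped calls out of 10,000 established calls gives a CDR of 1.5%,
# which would sit within a 2% regulatory limit.
print(call_drop_rate(150, 10_000))  # 1.5
```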

Call drops are rare in fixed-line networks. Since, for most users, the mobile wireless network is simply an extension of the fixed-line network, they expect the same quality of performance from both. In the future, this demand for improved quality of service will only grow as more of people's lives revolve around services provided by mobile wireless networks.

Every service provider in India furnishes, among other quality parameters, monthly call drop data to the Telecom Regulatory Authority of India (TRAI). Monthly data from most of these service providers indicated a call drop rate of less than 2%, the limit prescribed by TRAI. If the call drop rate were really below 2% as claimed, it should not significantly affect the user experience. However, a nationwide outcry over frequent call drops prompted TRAI to carry out its own measurements in two big cities, Mumbai and Delhi. To everyone's surprise, the data collected by TRAI showed a different picture: across service providers, call drop rates varied from 0.84% to 17.29% in Delhi and from 0.97% to 5.56% in Mumbai. All but one of the six service providers crossed the 2% limit, some very significantly. A similar situation could exist in other parts of the country.

In addition to the inconvenience, call drops also impose extra charges on the user, who must call again to continue the conversation. Since nearly 41% of mobile users in India pay on a per-minute pulse rate, call drops place an unnecessary financial burden on them: in the per-minute pulse scheme, users pay for a full minute even if the connection lasts only a few seconds before the call drops. The user, therefore, is the sufferer. Even the remaining 59% of users, who are on pay-per-second schemes, suffer not just inconvenience but monetary loss, since they spend extra time on the follow-up call to compensate for the interrupted conversation. Such call drops can seriously undermine productivity and efficiency in professional dealings.
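The arithmetic behind the per-minute pulse burden can be sketched in a few lines (the durations are illustrative):

```python
import math

def minutes_billed(call_durations_s):
    """Per-minute pulse billing: each call (or redial) is rounded up
    to whole minutes before charging."""
    return sum(math.ceil(d / 60) for d in call_durations_s)

# A 71-second conversation completed in one call costs 2 pulses...
uninterrupted = minutes_billed([71])
# ...but if the call drops at 61 s and the user redials for the final 10 s,
# the same conversation costs 3 pulses.
interrupted = minutes_billed([61, 10])
print(uninterrupted, interrupted)  # 2 3
```

The dropped call costs an extra full-minute pulse even though the total talk time is identical.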

Call drops can typically be avoided if service providers take measures such as optimally balancing traffic among the different frequency layers, minimizing interference and congestion, and maximizing the service area. Within the available spectrum, therefore, the quality of service in mobile networks can only be improved by strengthening the network infrastructure and deploying technological solutions that minimize call drops. However, TRAI has pointed out that while the minutes of usage by mobile users have grown by 6.8% over the last couple of years, investments made by the service providers, excluding spectrum purchases, have increased by only 4.6% in the same period. This mismatch between usage growth and infrastructure investment needs to be bridged, since the quality of service problem will only boomerang in future as the mobile user base grows.

While the service providers, hopefully, make efforts to improve their infrastructure, can something be done to help the consumers?

In a recent consultation paper, TRAI has posed two important questions, among others, to all the stakeholders, both consumers and service providers.

  • Do you agree that calling consumers should not be charged for a call that got dropped within five seconds? In addition, if the call gets dropped any time after five seconds, the last pulse of the call (minute/second) which got dropped, should not be charged. Please support your viewpoint with reasons along with the methodologies for implementation.
  • Do you agree that calling consumers should also be compensated for call drops by the access service providers? If yes, which of the following methods would be appropriate for compensating the consumers upon a call drop:
  1. Credit of talk-time in minutes/ seconds
  2. Credit of talk-time in monetary terms
  3. Any other method you may like to suggest
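One plausible reading of the proposal in the first question can be sketched as follows; the function, the pulse length and the rule's exact interpretation are my illustrative assumptions, not TRAI's final policy:

```python
import math

def billable_pulses(duration_s, dropped, pulse_s=60):
    """Sketch of the proposed rule: a call dropped within 5 seconds is free,
    and for a later drop the last (interrupted) pulse is not charged."""
    pulses = math.ceil(duration_s / pulse_s)
    if dropped:
        if duration_s <= 5:
            return 0
        return max(pulses - 1, 0)
    return pulses

print(billable_pulses(4, dropped=True))     # 0 (dropped within five seconds)
print(billable_pulses(130, dropped=True))   # 2 (interrupted third pulse waived)
print(billable_pulses(130, dropped=False))  # 3 (normal termination)
```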

All of us should proactively come forward and provide our views and suggestions to help the regulator to form a policy so that consumer satisfaction is restored without any delay.


  2. Consultation Paper on Compensation to the Consumers in the Event of Dropped Calls.

Mamidala Jagadesh Kumar is a Professor of Electrical Engineering at the Indian Institute of Technology, New Delhi, India. He is the Editor-in-Chief of IETE Technical Review and an Editor of IEEE Transactions on Electron Devices. He has widely published in the area of Micro/Nanoelectronics and is known for his excellence in teaching.  He is a member (PT), Telecom Regulatory Authority of India (TRAI). More details about Dr. Kumar can be found at


Smart cities with massive data centric living are hard to build without 5G networks

(To appear as an Editorial in IETE Technical Review, July/August 2015 issue.)

Today's mobile devices (for example, smartphones, tablets, smart watches, smart bracelets and smart glasses) have a number of sensors embedded in them, such as an accelerometer, compass, gyroscope, proximity sensor, ambient light sensor, GPS receiver, barometer, step detector and step counter, apart from the usual camera and microphone. These sensors give us easy access to motion data, ambient light levels, audio recordings, images of the surrounding environment and so on, making our lives more comfortable and smart [1]. As technology advances, more sensors will be integrated into these mobile devices. However, further innovation can be slowed down by two problems: (i) the computing and storage limitations of the mobile devices, and (ii) the constraints of the cellular network.

Let us understand the first problem. Since mobile devices have limited memory and computing resources, it is better to transfer computationally intensive tasks to a cloud computing environment through a network interface [2]. After performing these tasks, the cloud server returns the results to the mobile device. This is a clever idea, since offloading the demanding computation also saves the mobile device's battery from draining. If a large number of people use such devices, which will be the case in a few years, the amount of data the cloud servers need to handle can become enormous. The technique that comes to our rescue in analysing and processing such huge data, characterized by its variety, velocity and volume, is called Big Data analysis. Future mobile devices, therefore, will constantly talk to the cloud servers to transfer the data generated by their sensors, and this data needs to be processed and analysed using Big Data techniques [3].

Let us look at the second problem. This big data has to travel back and forth between the mobile device and the cloud over the network options available to us, such as WiFi, cellular, or other network interfaces. Network bandwidth will, therefore, become the biggest challenge to overcome. Network congestion in the existing 3G/4G technologies, and the resulting drop in data transfer speeds, has already become a major concern. We therefore require a different kind of network infrastructure, one that can carry this massive data with minimal latency.

There is an additional trouble. Apart from the data generated by mobile devices, the next source of enormous data will be the smart cities and smart homes that we wish to build [4]. In a smart city, we need to provide e-governance that is not only easily accessible but also transparent and fast. But that is only one aspect of being a smart city. Energy and water conservation, efficient waste disposal, city automation, seamless travel facilities and affordable access to health management systems are essential parts of a smart city. Traffic and weather also need to be monitored intelligently, and smart cities should be able to respond quickly to emergencies. Smart cities will also house smart, energy-aware homes able to intelligently monitor and control lighting, security, and the metering of power usage and generation. All this will require the deployment of a large number of wireless sensors and devices, which will generate massive data. Handling this unprecedented volume of sensor data, and the data transmission between the devices and the cloud server, requires a disruptive wireless technology [5]. That is why we need fifth generation (5G) networks, with speeds an order of magnitude higher than those of existing networks, to handle data traffic volumes expected to increase a thousandfold by 2020. A smart and intelligent globe is not possible without 5G networks. Let us briefly look at what enables 5G networks to achieve this objective of building an intelligent human society.

5G networks will have a huge band of spectrum, ranging from 30 GHz to 300 GHz, since they use mmWave technologies. The wavelength of the signals in this band is between 1 and 10 mm. This permits us to shift wireless transmissions from the crowded spectral band of present generation wireless networks to a different band of spectrum [6]. However, waves in this band are easily attenuated by environmental factors. Millimetre waves are therefore ideal for short-distance communications, with the added benefit that frequency reuse within the mmWave band minimizes the problem of spectrum shortage.
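The quoted band and wavelengths are consistent with the relation wavelength = c/f, as a quick check shows:

```python
C = 3e8  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz):
    """Wavelength in millimetres for a given carrier frequency."""
    return C / freq_hz * 1000  # metres -> millimetres

# The 30-300 GHz band spans wavelengths of 10 mm down to 1 mm,
# which is why it is called the mmWave band.
print(wavelength_mm(30e9))   # 10.0
print(wavelength_mm(300e9))  # 1.0
```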

Another departure in 5G networks, compared to conventional cellular networks, is the deployment of massive multiple-input multiple-output (MIMO) systems. Let us first understand the meaning of MIMO. When a radio wave bounces back and forth off walls, ceilings and other physical objects and reaches a single antenna at different times and angles, the result is interference, which degrades the data transfer speed. In MIMO systems, this property of radio waves is instead exploited by using many smart antennas acting as multiple transmitters and receivers, enabling more data to be transferred at the same time and hence higher speeds [7]. Conventional MIMO becomes massive MIMO when several hundred antennas are deployed to serve tens of users simultaneously. Massive MIMO is upwardly scalable: by using a large number of antennas, throughput can be increased, radiated power can be reduced, and the user experience in a given service area can be enhanced manifold, all using simple signal processing [8].
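As a rough, textbook-style illustration of why more antennas mean more throughput (this first-order model is my own simplification, not taken from the references), spatial multiplexing gives roughly min(transmit, receive) parallel streams, each carrying its own Shannon capacity:

```python
import math

def mimo_capacity_bps(bandwidth_hz, snr_linear, n_tx, n_rx):
    """First-order estimate: roughly min(n_tx, n_rx) parallel streams,
    each of Shannon capacity B * log2(1 + SNR)."""
    return min(n_tx, n_rx) * bandwidth_hz * math.log2(1 + snr_linear)

single = mimo_capacity_bps(20e6, 100, 1, 1)    # one antenna pair, 20 MHz channel
massive = mimo_capacity_bps(20e6, 100, 64, 8)  # 64 base-station antennas, 8 users
print(massive / single)  # 8.0
```

In practice the gain depends on channel conditions and signal processing, but the scaling with antenna count is the essential point.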

5G networks will also employ new ways of carrying data. Even as the data becomes massive, users want faster data speeds, and speed is related to the cell size, that is, the area covered by a base station. Depending on the area of coverage, cells are classified as microcells (< 2 kilometres), picocells (< 200 metres) and femtocells (about 10 metres). One way to increase the data speed is to reduce the cell size, so that the cell capacity is shared by fewer users, enabling each of them to transfer data at higher speeds. A future wireless network should be able to interact seamlessly with different cells and distributed antenna systems to enhance both cell coverage and delivery speeds. Such a network, termed a heterogeneous network (HetNet), is the heart of 5G: data transmission rates will vary widely (10 kbps to 10 Gbps), acceptable delays will range from a fraction of a millisecond to a few seconds, and online access requests could number from a few hundred to several million [9]. Unlike conventional single-tier wireless networks, the HetNet is therefore multi-tier, able to function efficiently across multiple nodes with different transmit powers, coverage areas and radio access technologies (such as Bluetooth, Wi-Fi, 3G, and 4G/LTE) [6].
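The cell classification and the capacity-sharing argument above can be sketched directly (the cell capacity and user counts are illustrative numbers, not measured figures):

```python
def cell_type(radius_m):
    """Classify a cell by coverage radius, using the figures quoted above."""
    if radius_m <= 10:
        return "femtocell"
    if radius_m < 200:
        return "picocell"
    if radius_m < 2000:
        return "microcell"
    return "macrocell"

def per_user_rate_mbps(cell_capacity_mbps, users_in_cell):
    """Shared cell capacity: fewer users per (smaller) cell means more per user."""
    return cell_capacity_mbps / users_in_cell

print(cell_type(150))                # picocell
print(per_user_rate_mbps(300, 100))  # 3.0  (crowded larger cell)
print(per_user_rate_mbps(300, 5))    # 60.0 (lightly loaded small cell)
```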

While 5G networks provide ways to meet the future data volumes of a networked society, we do need to overcome an important challenge. When every device that can be connected to the internet is plugged into the 5G network, it leads to what is known as the Internet-of-Things (IoT) [10]. If the devices in the IoT are close to each other, their data traffic does not have to go through the base station, reducing the burden on the cellular network; device-to-device (D2D) communication will therefore be an essential part of the IoT. However, as the number of devices connected to the wireless network may exceed 50 billion in the near future, and a majority of them may have to communicate not only among themselves but also with cloud servers, energy requirements will become a serious issue. 5G networks need to be run in a cost-effective and sustainable manner, since their contribution to global carbon dioxide (CO2) emissions cannot be allowed to increase from present levels [11]. In addition, if energy requirements are not contained, user tariffs can rise and the cost of running the networks can become untenable for the operators, making the business less attractive. 5G networks, therefore, should be energy efficient, increasing the number of bits that can be transmitted for each joule of energy consumed. But this is easier said than done [12].
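The efficiency metric mentioned above, bits per joule, is straightforward to state (the numbers below are purely illustrative):

```python
def energy_efficiency_bits_per_joule(bits_transmitted, energy_joules):
    """Energy efficiency as discussed above: bits delivered per joule consumed."""
    return bits_transmitted / energy_joules

# Illustrative: delivering 1 Gbit for 10 J versus 2 J of network energy.
# The second network is five times more energy efficient.
print(energy_efficiency_bits_per_joule(1e9, 10))  # 1e8 bits/J
print(energy_efficiency_bits_per_joule(1e9, 2))   # 5e8 bits/J
```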

As 5G networks become ubiquitous in our lives, securing the massive amounts of data, much of it confidential and very sensitive, from eavesdroppers is another challenge that needs to be addressed. The designers of 5G networks need to provide unsurpassed security for the data that flows through these networks [6]. Otherwise, entire cities or security installations could be brought to a standstill.

As the human race embraces massive data-centric living, the unique features of 5G networks (small cells, device-to-device communication, exploitation of the mmWave spectrum with GHz-wide bandwidths, and off-loading of computationally intensive operations to cloud servers) will make them an inevitable choice for our future cellular communication systems. How quickly we can build smart cities consisting of smart homes and smart individuals is, therefore, closely tied to how quickly 5G networks evolve and become cost-effective. We may have to wait until the beginning of the next decade for this dream to be realized. When governments promise to build smart cities, we need to be aware that there is a long path ahead.


  1. Q. Han, S. Liang, and H. Zhang, "Mobile Cloud Sensing, Big Data, and 5G Networks Make an Intelligent and Smart World", IEEE Network, pp.40-45, March/April 2015.
  2. N. Zhang, N. Cheng, A. T. Gamage, K. Zhang, J. W. Mark and X. Shen, “Cloud assisted HetNets toward 5G wireless networks”, IEEE Communications Magazine, vol.53, no.6, pp.59-65, June 2015.
  3. IEEE Bigdata home:
  4. IEEE Smart City home:
  5. X. Shen, “Device-to-Device Communication in 5G Cellular Networks”, IEEE Network, pp.1-3, March/April 2015.
  6. N. Yang, L. Wang, G. Geraci, M. Elkashlan, J. Yuan and M. Di Renzo, “Safeguarding 5G Wireless Communication Networks Using Physical Layer Security”, IEEE Communications Magazine, vol.53, no.4, pp.20-27, April 2015.
  8. T. L. Marzetta, "Massive MIMO: An introduction", Bell Labs Technical Journal, vol.20, pp.11-22, 2015.
  9. A. Adhikary, H. S. Dhillon and G. Caire, “Massive-MIMO Meets HetNet: Interference Coordination Through Spatial Blanking”, IEEE Journal of Selected Areas in Communications, vol.33, no.6, pp.1171-1186, June 2015.
  10. Y. Ghamri-Doudan, R. Minerva, J. Lee and Y. M. Jing, "Special Issue on World Forum on Internet-of-Things Conference 2014", IEEE Internet of Things Journal, vol.2, no.3, pp.187-188, June 2015.
  11. M. Olsson, C. Cavdar, P. Frenger, S. Tombaz, D. Sabella and R. Jantti, “5GrEEn: Towards Green 5G Mobile Networks”, 1st International Workshop on GReen Optimized Wireless Networks (GROWN’13), 2013, pp.2012-2016.
  12. G. Wu, C. Yang, S. Li and G. Y. Li, “Recent advances in energy-efficient networks and their application in 5G systems”, IEEE Wireless Communications, vol.22, no.2, pp.145-151, April 2015.



The Pay or Perish Game: Why we should stand up against “active discrimination” for the survival of net neutrality

How to cite: M. J. Kumar, “The Pay or Perish Game: Why we should stand up against ‘active discrimination’ for the survival of net neutrality”, IETE Technical Review, Vol.32, No.3, pp.161-163, May-June 2015.

With rapid technological innovations, the internet has, in a short period, grown into a disruptive social force. The internet has different layers: the content layer (supported by the people who develop content and applications), the logical layer (the machines that function using algorithms, protocols and standards for the transmission of data packets) and the physical layer (end-point devices such as computers, smartphones and tablets). Two types of people use the internet: those who simply access it for information, and those who develop innovative services and applications using the internet as a medium. The network operators, or internet service providers, invest in the internet infrastructure of switching and transmission capacity.

There are three primary means of accessing the internet: wireline, wireless and satellite. The satellite option is the least preferred because it is expensive and is no good for ordinary folks like you and me. In the wireline option, one can use either a pair of twisted copper wires or a fiber optic network. Copper telephone lines are too slow for broadband internet. This problem is solved to some extent by running optical fiber up to the neighbourhood node and distributing the signal from the node to the user over copper lines. But this heterogeneous network can never be a substitute for an all-fiber optic network; the experience of accessing broadband internet over a single strand of fiber is unmatched. Fiber optic networks not only offer large data rates (of the order of 100 Gbit/s) but are also symmetric, i.e. they offer equal speeds for uploading and downloading.

In the recent past, with the advent of low-cost smartphones and tablets, accessing the internet over wireless or cellular data networks has become a convenient and easy option. Being connected while you are mobile feels great. It is therefore not surprising that, globally, the number of mobile-connected devices exceeded the world's population in 2014. A tenfold increase is expected in global mobile data traffic between 2014 and 2019, representing a compound annual growth rate (CAGR) of 57%. "Smart" devices will account for more than 50% of all devices hooked to the mobile network by 2019. In India, between 2013 and 2014, mobile data traffic increased by anywhere from 75% to 95%, depending on the network operator [1].
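A quick sanity check of the quoted growth rate: a round tenfold increase over the five years 2014-2019 implies roughly 58% per year, consistent with the ~57% CAGR above (which is computed from exact traffic figures rather than a round multiple):

```python
def cagr(initial, final, years):
    """Compound annual growth rate implied by total growth over a period."""
    return (final / initial) ** (1 / years) - 1

# Tenfold growth over five years -> about 58.5% per year.
print(round(cagr(1, 10, 5) * 100, 1))  # 58.5
```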

Irrespective of the means you use to connect to the internet, the fundamental aspect of the internet is its openness, which permits equal opportunity of access and hence unbridled freedom to express and to listen. Net neutrality refers to the ability of users to freely choose services and to access the information made available by content providers [2]. However, internet service providers have the means to cherry-pick services and applications, leading to service discrimination [3]. While it is the legitimate right of network operators to charge users for the amount of data consumed (e.g. 2 or 3 gigabytes per month), the issue becomes complicated when they start examining the data packets passing through their networks with the intention of discriminating one user from another, or one traffic destination from another. This gives them the ability to selectively prioritise or de-prioritise users and points of traffic origin and destination for commercial rather than technical reasons [4]. Since internet service providers are tempted to maximize their profits by prioritising services, this can lead to selfish behaviour that destroys the neutral nature of the internet. This concern is the core of the "network neutrality" debate [5].

Professor Susan Crawford of Harvard Law School, who has written a thought-provoking book on the future of high-speed internet access, believes that if net neutrality disappears, high-speed internet will be accessible only to the rich, since it will be beyond the means of most of us [6]. Intense debates about the sustenance of net neutrality were originally associated with wireline networks in the European Union (EU) and the United States. In the recent past, however, net neutrality in wireless or cellular mobile networks has become a passionately debated topic, particularly in countries like India. Net neutrality is a more complex issue in the mobile internet than in the wired internet, due both to technical reasons and to the way the mobile network sector has evolved during the last decade [7].

Wireless networks will never be able to provide capacity even remotely near that of fiber optic networks, as capacity constraints are intrinsic to the former. Even a migration from 3G to 4G networks will only improve mobile network capacity by approximately three times. As a result, with the number of mobile users growing, capacity constraints will ultimately make wireless networks more congested than wireline networks. Until we reach the point of congestion, all the packets of data that pass through the network operators' servers can be treated equally, i.e. what comes in first goes out first, irrespective of origin or destination. This is called the "best efforts system" [8]. In this system, each user gets momentary access to the maximum bandwidth of the channel through statistical multiplexing. On average, therefore, no user feels that their experience of using the network is compromised.
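The best-efforts system is, in essence, a first-in first-out queue; packets leave in exactly the order they arrived, regardless of origin, destination or application. A minimal sketch (the packet labels are invented for illustration):

```python
from collections import deque

# FIFO queue: the "best efforts" discipline treats all packets identically.
queue = deque()
for packet in ["video@A", "email@B", "voip@C"]:
    queue.append(packet)

# Packets are served strictly in arrival order.
served = [queue.popleft() for _ in range(len(queue))]
print(served)  # ['video@A', 'email@B', 'voip@C']
```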

It is important to note that net neutrality should not be misinterpreted as "every packet must be treated identically." The network should be indiscriminate with regard to the origin or destination of the data packets. However, based on need, to handle congestion and to provide fair access to network resources, the operator can shape the traffic or even drop data packets [9]. Network operators, therefore, routinely use a technique called "needs-based discrimination" to cope with the finite capacity of the network. In this scheme, certain data packets jump the queue and come to the front to deal with congestion. Without this discrimination, latency-sensitive traffic such as VoIP or media player applications would suffer, impairing the user experience.
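Needs-based discrimination replaces the FIFO queue with a priority queue: under congestion, latency-sensitive packets jump ahead. A minimal sketch (the application classes and priorities are illustrative assumptions, not an operator's actual policy):

```python
import heapq

# Lower number = higher priority; VoIP is the most latency-sensitive here.
PRIORITY = {"voip": 0, "video": 1, "email": 2}

heap = []
# Arrival order: email, voip, video. The sequence number breaks ties
# so that equal-priority packets still leave in arrival order.
for seq, (app, payload) in enumerate([("email", "e1"), ("voip", "v1"), ("video", "m1")]):
    heapq.heappush(heap, (PRIORITY[app], seq, payload))

# Service order: the VoIP packet jumps the queue despite arriving second.
order = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(order)  # ['v1', 'm1', 'e1']
```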

However, what is worrisome, and what can seriously affect net neutrality, is not needs-based discrimination but the "active discrimination" of data packets by network operators. In active discrimination, operators give priority to certain data packets even when the network is not congested. This prioritisation can be based on a prior financial arrangement with an application or content provider: an application provider can obtain preferential treatment by entering into a commercial agreement with a network operator. In a wireless network, capacity constraints make this a particularly undesirable situation.

As an example, let us assume that two thirds of a network operator's capacity is consumed by a small number of service or application providers with the financial might to pay for active discrimination. Since wireless network capacity is limited, the majority of users have to scramble for the remaining one third. As a result, the less fortunate application providers and start-up entrepreneurs who cannot pay for active discrimination will be poorly visible to internet users. This preferential access for application and service providers with deep pockets can lead to indirect control of the internet by the network operators. Moreover, in a discriminatory regime, since advertising revenue will always be attractive to the network operators, they may abandon their neutral role and degrade the quality of the non-priority lane to mine greater profits from the priority lanes [10].

Until a technological breakthrough takes place to enhance wireless network capacity, needs-based discrimination of data packets may continue to be an essential part of the wireless internet, enhancing the user experience; no one should have any complaints about it. It is the active discrimination of users by network operators for monetary gain that is detrimental to the very survival of a neutral internet.

The question that needs our attention, therefore, is: should the network operators be given a free hand to run the network the way they want, ultimately leading to a highly discriminatory and non-neutral internet? If the operators gain complete control of the wireless internet and become discriminatory, will the wireless internet still remain the social force of unrestrained information exchange that it is today? Will it still encourage and nurture innovative new entrepreneurs and application developers? The answers to these questions should be clear to you by now.

Shouldn’t we, therefore, as academicians and scientists, stand up and use all our legitimate means to stop this “active discrimination” and protect the net neutrality?


  2. J. S. Gans, “Weak versus strong net neutrality”, Journal of Regulatory Economics, vol.47, pp.183–200, 2015.
  3. J. Crowcroft, “Net neutrality: the technical side of the debate: a white paper,” ACM SIGCOMM Computer Communication Review, vol. 37, no. 1, pp. 49–56, 2007.
  4. R. T. B. Ma, D. M. Chiu, J. C. S. Lui, V. Misra, and D. Rubenstein, "On Cooperative Settlement Between Content, Transit, and Eyeball Internet Service Providers", IEEE/ACM Transactions on Networking, vol.19, no.3, pp.802-815, June 2011.
  5. R. T. B. Ma and V. Misra, “The Public Option: A Nonregulatory Alternative to Network Neutrality”, IEEE/ACM Transactions on Networking, vol.21, no.6, pp.1866-1879, December 2013.
  7. D. Miorandi, I. Carreras, E. Gregori, I. Graham and J. Stewart, “Measuring Net neutrality in Mobile Internet: Towards a Crowdsensing-based Citizen Observatory”, 2013 IEEE International Conference on Communications 2013: Workshop on Beyond Social Networks: Collective Awareness, pp.199.
  8. P. Ganley and B. Allgrove, “Net neutrality: A user’s guide”, Computer Law & Security Report, vol.22, pp. 454–463, 2006.
  9. V. G. Cerf, “Knocking Down Strawmen”, IEEE Internet Computing, vol.18, no.6, pp.88-89, Nov-Dec 2014.
  10. M. Bourreau, F. Kourandi and T. Valletti, “Net Neutrality with Competing Internet Platforms”, The Journal of Industrial Economics, vol.LXIII, pp.30-73, March 2015.

Global University Rankings: What should India do?

How to cite: M. J. Kumar, “Global University Rankings: What should India do?”, IETE Technical Review, vol.32, no.2, pp.81-83, March-April, 2015.

Until the first university ranking in the world was published in U.S. News and World Report in 1983, the term "university rankings" was not taken very seriously. More than a quarter century after those first rankings were announced, India now keenly wants to be part of them. What has changed during these years? With growing economic development and increasing aspirations, the Indian public seeks to know the inner functioning and operational efficiency of our higher educational institutes and universities. Since resources are made available to universities through taxpayers' money, it is also the responsibility of the universities to let the public know what is being achieved with those resources. The fact that Indian universities do not find a place among the top universities in the world has become a cause of intense public debate.

But the question is: should we compete for a 'respectable' place in the world rankings, or should we have a ranking system suited to our country? Any ranking system we adopt should enhance the credibility of universities in the eyes of the general public, leading to increased public confidence and trust. Indian universities, like the other 15,000 universities in the world, have national obligations to perform. One of the important commitments of a university is to provide equal access to higher education, even if the university fails to get into the "global rankings" [1]. However, there is enough evidence to show that higher education has of late become a preserve of the elite, and global rankings have exacerbated the inequalities in deciding who can access good quality higher education [1-3]. Let us briefly see what has gone wrong.

We have several widely known ranking systems: (i) the Times Higher Education (THE) World University Rankings, (ii) the QS (Quacquarelli Symonds) World University Rankings, (iii) the Academic Ranking of World Universities (ARWU), also known as the Shanghai Jiao Tong University Ranking, and (iv) the Webometrics Ranking. All of these ranking methods have serious shortcomings [1]. Even the most popular schemes cover less than 5% of all the universities in the world. The THE and QS rankings give heavy weightage to reputation measured through online surveys, often with a strong bias towards English-speaking countries. ARWU and THE use data from Thomson Reuters, while QS uses data from Scopus (from Elsevier), making these rankings rely heavily on publications and citations in science-based journals. The humanities and social sciences remain under-represented in most ranking schemes, so universities with a main focus in these areas do not figure in the rankings. Rankings are also affected by the ability of universities to garner large endowments, exceeding even the national budgets of some countries. The presence of Nobel laureates and Fields Medal winners who publish in the so-called elite science journals further influences these ratings.

Surprisingly, the educational experience and learning outcomes of students are not surveyed by most ranking schemes. This is a major omission, since education is the primary objective of a university [1]. The rankings completely ignore whether the teaching-learning practices in a university are of the highest order, even while its professors are researchers and unpretentious experts in a wide variety of fields including science. The global ranking schemes, therefore, only manufacture a fancy perception of a university's reputation rather than represent its true functioning. This makes higher education a sellable commodity, to the advantage of a few elite.

We tend to forget that assigning a numerical rank to a university to measure its performance deflects our attention from a university’s fundamental function, i.e., education. A university’s functioning should be audited periodically, but only to ensure quality of performance and accountability. Evaluation of a university should not degenerate into a single number for public display [1]. Even the Indian media has started releasing rankings of Indian universities, frequently with hilarious outcomes. That is because most media personnel neither understand the nuances of the mathematics involved in ranking [1] nor appreciate the complexity of the diversities intrinsic to the functioning of a university.

Why is it not desirable to assign a number to a university’s performance? Because it reinforces the false impression that universities are corporate entities expected to satisfy consumer interests. Ranking, together with marketing and public relations, has made global higher education a marketable commodity [4]. As a result, by 2025, nearly eight million students are expected to study outside their own countries. The bulk of these students come from China, India and other neighbouring countries; India is the second largest exporter of students to foreign universities. The beneficiaries are invariably the universities in English-speaking countries [1].

It is estimated that, to be in the “world class” of the rankings, a university’s budget must be 1.5 to 2 billion US dollars per year, which is beyond the means of many universities in countries like India [5]. University rankings are also mired in controversies because they can be influenced by made-up, dicey and false data. Trusting rankings, rather than professional integrity and peer regulation, to evaluate a university’s excellence can lead to undesirable results. We must remember that “when universities openly and increasingly pursue commercialization, it powerfully legitimizes and reinforces the pursuit of economic self-interest by students and contributes to the widespread sense among them that they are in college solely to gain career skills and credentials” [1], [6].

University rankings have also affected administrators, parents and students in ‘what we choose to do, who we try to be, and what we think of ourselves’ in higher education [1], [7]. The use of rankings is now increasingly seen by many countries as an indirect way of pressurising universities into “a costly and high-stakes academic arms race” instead of focussing on the immediate developmental and social needs of a country [8]. Rankings have simply become aids to potential ‘customers’ who have the financial resources to access the highly ranked elite universities [9]. University rankings, in general, are not neutral methods; they are shaped by politico-ideological technologies that exclude or include a university in the elite club. Rankings, therefore, assign a hierarchized social identity to the university. This substitution of the quest for excellence with ultra-eliteness results in greater stratification and concentration of resources, forcing the less fortunate universities into an unrecoverable cycle of disadvantage [10].

All this makes global university rankings untrustworthy and unsuitable for countries like India. Therefore, even before India thinks of getting into global university rankings, it needs to develop a credible and transparent ranking system which reflects India’s social and national requirements. What are these requirements?

We should recall that universities are public-interest entities [1]. Public trust in a university and its reputation, therefore, depend on the social impact the university makes. Social impact is directly related to the quality of the talent or skill training a university provides to its students, who in turn become the backbone of the workforce. India today requires a huge skilled workforce. How well are our universities prepared to meet this challenge? To give an example, nearly a million students write the Graduate Aptitude Test in Engineering (GATE) every year to try their luck at getting admission into post-graduate education. However, on average, only about 15–17% of the candidates qualify in this exam each year. This is a clear indication of the poor quality of educational training they have received in their undergraduate courses. Our universities have largely failed in talent training and skill development, and their indifference has only weakened their moral standing in public perception.

We should not lose sight of the fact that a university is not a commercial entity but a place of higher learning, conducting teaching and research at the undergraduate and postgraduate levels [9]. Therefore, it is imperative that India develop an India-specific ranking system, both transparent and reliable, to assess the performance of Indian universities. This should be based on, for example, (i) the care universities take in making higher education accessible even to the most underprivileged, (ii) the skill or talent training that a university imparts to its students so that they become agents of social and economic transformation, (iii) how well universities encourage the ability to think in an unorthodox fashion and carry out original research in different fields, (iv) the ability of a university to develop technologies relevant to local social needs and human development, and so on.

Every few years, drawing on our own experience and on that of other countries which are not in the business of commercializing higher education, we should keep improving our ranking system to advance the academic excellence of Indian universities. If we focus on the core objectives of our universities, i.e., teaching and research, the day will not be far when we talk about the excellence of Indian universities rather than their rankings. When Indian universities become known for their excellence, they will be in a position to attract students and the best intellects from across the world, in turn improving their global prestige and recognition. If we can transform our higher educational institutions into great universities known for their knowledge and social commitment, what else do we want? Do you agree with me?


  1. K. Lynch, “Control by numbers: new managerialism and ranking in higher education”, Critical Studies in Education, 2014.
  2. T. McCowan, “Is there a universal right to higher education?” British Journal of Educational Studies, vol. 60(2), pp.111–128, 2012.
  3. K. Lynch, B. Grummell and D. Devine, New managerialism in education: Gender, commercialization and carelessness, Basingstoke: Palgrave Macmillan, 2012.
  4. J. Rutherford, “Cultural studies in the corporate university”, Cultural Studies, vol.19(3), pp.297–317, 2005.
  5. E. Hazelkorn, Rankings and the reshaping of higher education: The battle for world class excellence, Basingstoke: Palgrave Macmillan, p.197, 2011.
  6. I. Harkavy, “The role of universities in advancing citizenship and social justice in the 21st century”, Education, Citizenship and Social Justice, vol.1(1), pp.5–37, 2006.
  7. I. Hacking, The taming of chance, Cambridge: Cambridge University Press, 1990.
  8. I. Ordorika and M. Lloyd, “International rankings and the contest for university hegemony”, Journal of Education Policy, 2014, DOI: 10.1080/02680939.2014.979247.
  9. P. Taylor and R. Braddock, “International university ranking systems and the idea of university excellence”, Journal of Higher Education Policy and Management, vol. 29(3), pp.245–260, 2007.
  10. S. S. Amsler and C. Bolsmann, “University ranking as social exclusion”, British Journal of Sociology of Education, vol. 33(2), pp.283–301, 2012.
Posted in Education and Research

A Silicon Biristor with Reduced Operating Voltage: Proposal and Analysis

In this paper, using 2D simulations, we report a silicon biristor with reduced operating voltage based on the surface accumulation layer transistor (SALTran) concept. The electrical characteristics of the proposed SALTran biristor are simulated and compared with those of a conventional silicon biristor with identical dimensions. The proposed device is optimized with respect to its device parameters to ensure a reasonable latch window while maintaining low latch voltages. Our results demonstrate that the SALTran biristor exhibits a latch-up voltage of 2.14 V and a latch-down voltage of 1.68 V, leading to a 57% lower operating voltage compared to the conventional silicon biristor.

The paper is freely downloadable from the IEEE Journal of the Electron Devices Society.

Posted in Abstracts of my Research Work

Vertical Bipolar Charge Plasma Transistor with Buried Metal Layer

A self-aligned vertical Bipolar Charge Plasma Transistor (V-BCPT), with a buried metal layer between the undoped silicon and the buried oxide of the silicon-on-insulator substrate, is reported in this paper. Using two-dimensional device simulation, the electrical performance of the proposed device is evaluated in detail. Our simulation results demonstrate that the V-BCPT not only has a very high current gain but also exhibits a high BVCEO · fT product, making it highly suitable for mixed-signal high-speed circuits. The proposed device structure is also suitable for realizing doping-less bipolar charge plasma transistors using compound semiconductors such as GaAs and SiC with low thermal budgets. The device is also immune to the non-ideal current crowding effects that crop up at high current densities.

You can download the full paper from


Posted in Abstracts of my Research Work

Innovation and technology should lead to abundance not scarcity

To appear in IETE Technical Review, Jan-Feb 2015.

How did economies start? In olden days, after meeting our own requirements, we shared the surplus with neighbours and relatives. This also helped us build the relationships necessary for a healthy society. However, when we started using money as the primary medium of transaction, the producer and the consumer became separate entities and began to increasingly exchange scarce man-made goods. As a result, we moved from a situation of abundance (of labour, goodwill, and renewable resources) to one of scarcity. This problem has been compounded by the advent of science and technological innovations [1].

In 1934, Stuart Chase, in The Economy of Abundance, asked us to imagine the life of people who lived 100 years earlier [1], [2]. They used far less energy than we do now, deriving it from easily available and renewable resources such as wood, water, wind and animal labour. It was a hard life, but socially rewarding and personally satisfying. Nearly two centuries later, we use several hundred times more energy than those people. A typical US lifestyle requires ~11 kW/person; in Europe, it is ~3.5–5.5 kW/person. India and China manage with about 1 kW/person, and the world average today is about 2 kW/person. If our energy requirements continue to increase at today’s rate, in future we may require, on average, no less than 4 kW/person. This translates into about 40 TW for a world of 10 billion people [3]. That is mind-boggling!
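The 40 TW figure follows directly from the per-capita numbers quoted above. A quick back-of-the-envelope check (a sketch only, using just the figures given in the text):

```python
# Back-of-the-envelope check of the global energy-demand figure quoted above.
per_capita_kw = 4.0   # projected average continuous power per person (kW)
population = 10e9     # assumed future world population (10 billion people)

total_w = per_capita_kw * 1e3 * population  # kW -> W, scaled by population
total_tw = total_w / 1e12                   # W -> TW

print(f"Projected global demand: {total_tw:.0f} TW")  # -> 40 TW
```

Note that kW/person here is continuous power, not energy per unit time per year, so the conversion involves only SI prefixes (kilo = 10^3, tera = 10^12).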

In spite of technological advances and increased energy consumption, modern life has become very complex, and we are no happier than people were a couple of centuries ago. We have become self-centred and devoid of social concerns. A behavioural neuroscientist would describe us today as curiosity-driven, pleasure-seeking human animals. Our striving for immediate gratification and our insatiable desires are addictive. This deprives us of the opportunity to be self-aware and prudent, and does not let us work towards building an equitable and sustainable society using science and technology as instruments [4].

The basis of being a scientist is rational thinking. Unfortunately, purely rational calculation can crowd out values such as love, compassion and sharing, which are the foundations of happy living. A rational person thus risks becoming a self-interested person, and it is a time-honoured observation that a self-interested person cannot contribute to the common good of society [5].

With the advent of science (and hence rational thinking), together with money as our medium of transaction, we moved from abundance to scarcity because wealth accumulation became our primary focus. Greed (an unwillingness to share) is the shadow of scarcity. Creating scarcity fuels competition, and to be competitive we use innovation and withhold knowledge [1]. Legal restrictions on knowledge lead to monopoly and exploitation by those who hold it. We have thus drifted from exchanging surplus to exchanging scarcity, idolizing profit-driven innovators as the primary vehicles of development. “Abundance has been appropriated by some and squandered by most of us”, leading to the mess that we are in [2]. In this situation, the rich become richer at the cost of those on the margins. We seem to have forgotten what Andrew Carnegie, a great philanthropist, once wrote: ‘A man who dies rich dies disgraced’.

A rational and sustainable world is possible only if there is abundance. For innovators to be able to build a sustainable world, we need to create an environment where passion, intrinsic motivation and a willingness to share knowledge become core values in the psychology of young innovators. Innovation and technology should be used to make our lives fulfilling and cooperative; we should not let them make us more anxious, addictive, fearful, competitive and greedy than ever.

I think we need to sensitize our young innovators to revisit our economic models and strive to build economies of abundance rather than scarcity.


  1. A. Fricker, “Economies of abundance”, Futures, vol.31, pp.271–280, 1999.
  2. S. Chase, The economy of abundance, New York: MacMillan, 1934.
  3. D. Cahen and I. Lubomirsky, “Energy, the global challenge, and materials”, Materials Today, vol.11, no.12, pp.116–120, December 2008.
  4. P. C. Whybrow, “Dangerously addictive: Why we are biologically ill-suited to the riches of modern America”, Neuropsychiatria i Neuropsychologia, vol.4, no.3–4, pp.111–115, 2009.
  5. M. Olson, The logic of collective action, Cambridge: Harvard University Press, 1965.
Posted in Education and Research