Interview

'There are many reasons behind lower growth rates in National Accounts back-series': T.C.A. Anant

Growth Rates of Gross Domestic Product (GDP). Source: Central Statistics Office, Ministry of Statistics & Programme Implementation, Government of India.

The Union government’s rejection of the GDP back series prepared by a technical committee of the National Statistical Commission (NSC) earlier this year, and the release of the official back series by the NITI Aayog and the Central Statistics Office (CSO) on November 28, 2018, have predictably snowballed into a controversy. The estimates released by the NITI Aayog and the CSO make the economic performance of the present National Democratic Alliance (NDA) government, led by Prime Minister Narendra Modi, appear far better than that of the United Progressive Alliance (UPA), which was led by Manmohan Singh. Curiously, the official back series released extends only to the first year of the Manmohan Singh government. Although the revisions carried out are unusually large, the explanations provided by the CSO in support of the databases and deflators chosen are cursory. A number of discomfiting questions have been raised about the exercise by statisticians and economists, including the Chairman of the Prime Minister’s Economic Advisory Council.

Among the issues that need clarification is why an incomplete back series was released barely months before the Lok Sabha elections, at a time when the government is facing criticism for its management of the economy. No official answers have come forth. T.C.A. Anant, former Chief Statistician of India (July 1, 2010 to January 31, 2018), talks to Puja Mehra, a senior journalist based in New Delhi, on issues relating to the computation of the back series. He points out the fallacy behind linking the data from the Ministry of Corporate Affairs (MCA) with that from the Annual Survey of Industries (ASI) in the back-series computation that produced the present estimates. Anant also advises against politicising the complex issues and calls for modernisation of government data collection. Excerpts from an interview held at the Delhi School of Economics, where he is currently the Head of the Economics Department.

It is said that the back series was vetted by eminent statisticians. Was it vetted by you?

No, no. Not vetted in a strict sense. Formally, the way the National Accounts Division of the CSO [functions] is that there is an Advisory Committee on National Accounts, which consists of both statisticians and economists, and includes representatives from other government agencies like the Reserve Bank of India (RBI). They have been engaging with this question for some time now. This methodology was presented before them. But because the back series was a complex task, and I will explain later why it became complex, the CSO ran it past some of us to show us the results after the Advisory Committee on National Accounts had cleared the methodology. What we did is similar to what all of you are now doing. We asked a number of questions like ‘why is this growth different from the one seen earlier?’ The answer to these ‘why’s, for the most part, lies in the manner in which the backcasting was done. The CSO responded to these queries, many of which are encapsulated in its FAQs.[1]

This was in October this year?

This initial discussion was sometime in October. As they did not have detailed analyses for all the queries, they said, 'we will prepare them'. We gave suggestions on these as well. This is the reason it got delayed on the first occasion.

But broadly speaking, in so far as any back series is concerned, backcasting fundamentally means that you take the new methodology, which has two different aspects to it. One, there are new data sources. Two, there are different ways of computation. And you rework the past using them [i.e. the new data sources and different ways of computation].

Where there are new methods of computation, reworking the past is usually not an issue. When there is a new data source, the problem is ‘what do you do’. In some cases, there is similar data in the past. Where that is the case, they used that [similar data] to project it. But in other cases, similar data did not exist in the past. There the CSO used proxies or splicing. Formally, National Accounts is compiled in compilation categories. So, for each compilation category, they have backcast: where the current data and methodology extend backward, they have extended them backwards; where the current data does not extend, they have used various proxies, or splicing, and so on and so forth. Methodologically, this is absolutely correct. The discussion has been over the outcomes. There has been a lot of concern over lower growth rates. There are many reasons for this phenomenon.

The first reason for lowering is actually the simplest. It is a natural consequence of this approach to backcasting: the change-of-base effect. In simple terms, when you go from 90 to 100, the increase is 11 per cent. When you go from 100 to 90, the decrease is 10 per cent. This happens whenever you change from a Laspeyres to a Paasche index. This is literally of that form. A somewhat elaborate explanation of this phenomenon has been given by the Chief Economist at SBI [State Bank of India].
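A minimal sketch of the arithmetic, using the same numbers: the same 10-point gap produces different percentage changes depending on the direction of comparison, because the denominator changes.

```python
# The asymmetry behind the change-of-base effect: a 10-point move is
# +11.1 per cent measured against 90, but -10.0 per cent against 100.
low, high = 90.0, 100.0

increase = (high - low) / low * 100    # 90 -> 100, base is 90
decrease = (low - high) / high * 100   # 100 -> 90, base is 100

print(f"90 to 100: {increase:+.1f}%")  # +11.1%
print(f"100 to 90: {decrease:+.1f}%")  # -10.0%
```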

The other reasons are there in the CSO's FAQs.

There is one issue in the backcasting: because MCA data [Ministry of Corporate Affairs’ database, MCA-21, of annual corporate filings] is not available for the past, what they have done for the estimation of the corporate sector is to continue with the same approach that they used earlier [in the 2004-05 series].

Yes, they have used Annual Survey of Industries (ASI) data where they did not have the MCA data. But that’s the question: Can they combine MCA data with ASI data in the way they have? Should they stick to only one database?

Ideally speaking, you can’t [combine MCA data with ASI data]. It’s simply not yet possible. The reason is that you don’t have a sense of the nature of the linkage between the MCA and the type of data you had in the past. That’s because you don’t have enough data. The MCA is a comprehensive database of corporate accounts. Prior to the MCA becoming available, there were some time series of corporate data available from listed companies, either through the Bombay Stock Exchange or other agencies. The RBI has also been producing studies on corporate performance with somewhat better structure. This has both listed and some unlisted companies in it.

The trouble is in trying to understand the relationship between these data sources and the coverage of the MCA. Here, the character of the relationship changes. In some years, the listed companies did better than the average. In other years, the listed companies did worse than the average of the total. That’s visibly the case even now. The problem is we do not as yet have a clear understanding of how this behaves going backwards.

Approximations have been done.

Yes, that is correct. This has enabled them to produce a time series. As such, that is O.K. What is important for users to recognise is that any such exercise has natural limitations, and not to push comparability beyond that.

Would you say that 2011-12, the earliest year for which MCA data has been used, has become a sort of artificial break in the time series? Are pre- and post-2011-12 really comparable in the back series?

It’s not an artificial break. Let me explain. This backcasting allows you to compare backwards. In so far as sectors other than the corporate sector are concerned, they have dealt with them quite adequately.  Other than the use of MCA data, there were other points of discontinuity between the 2004-05 series and the 2011-12 series. This approach has dealt with them quite reasonably.

That leaves the issue of MCA versus the older methodology. Here there is some disconnect. But even here the proxy they are using is a reasonable proxy. In manufacturing, though ASI does not have the same quality of coverage as MCA, it is certainly the best alternative available.

For the remainder, the RBI sample is what they have used. Again, this is a reasonable choice, and its great advantage is that it allows you to link all the way backward to the past. The essential logic of this choice is that this was what used to be done then [in the 2004-05 series]. In effect, this choice by the CSO essentially validates their earlier approach when the MCA was not available. There is probably some disconnect here.

But it is a disconnect for which we will probably never get a perfect answer and will just have to live with it. Rest assured, a similar disconnect will take place when we go in for the next base revision and an issue of comparing between the GST [Goods & Services Tax] database and the old tax database arises.

For the backcasted series to stop at 2004-05 and not go back all the way to 1950-51, as had been done in the previous backcastings…

…For the 2004-05 series, they backcasted till 1999-2000 and then spliced the rest [till 1950-51]. The reason is that, beyond a point, going backwards segment-wise can’t be done.

The nature of the economy is also changing, so probably it can't be done …

…Yes, fundamentally, all of these time series are, therefore, artificial.

How representative are they then of the economy?

They are representative. But they are noisy representatives. The trouble is, the longer the time series, the poorer the degree of assessment. Let me try to explain in brief. The link between what you call the real economy and the statistics is mediated through a set of institutions which themselves are not directly observable.

To illustrate: the manner in which we conduct business changes depending on the institutional environment we encounter. As the institutional character of the economy changes, the relationship between the measured outcome and the true economy changes.

So, if I were to take GDP measurement today, pick GDP measurement for the 19th or 17th century, and ask how reflective the comparison is of the difference between the two economies, the answer is: not really, because the underlying institutional structures of those economies were very different.

As institutional change is not being captured when we do time series analysis, we make an assumption that there is institutional stability. The fact is, if I took you back 200 years, somehow through a time machine, and inserted you into that economy, you would not be able to function in that society. Not because of GDP levels, but because the nature of the economy and its institutions was very different. The attitude to women, attitude to caste, attitude to relationships, all of these were different. The real economy is an outcome of these as well.

In effect, what I am measuring [i.e. GDP] is a contrived measure. I am personally not a very great believer in long time-series analyses. It’s alright to do this in short time horizons and assume institutional structures remain more or less stable. In long time horizons, it is really an absurdity.

What you are saying is very important because increasingly the trend is for researchers and commentators, even economic historians, to say that two hundred years ago the GDP was this or that and inequality was so and so…

…It’s meaningless. This is my personal take.

Intuitively, one can see that there's no way the GDP 200 years ago…

…This is true for each one of these parameters. Economists are very bad at coming to terms with institutional change.

To come back to the back series, the CSO has not used the MCA data even back to 2006, which is when the database was launched. They have used it only from 2011-12. Why so?

The MCA-21 database [of corporate filings] was started in 2006. It was later made mandatory for an expanding domain of companies based on a variety of criteria. But, initially, it was available for a very narrow set. Coverage was one problem. The coverage has more or less stabilised because, around 2011-12, they made it applicable to all corporates.

Secondly, they incorporated e-forms and XBRL filing[2], etc., only later. Initially, companies were allowed to file by uploading scanned PDFs. There were issues with data extraction. For these reasons, early MCA filings were not very usable for statistical work. The improvements introduced between 2006 and 2010 made it possible to use the data in the manner in which we now do. This was an area where the CSO worked closely with the MCA to improve the statistical usability of the data.

The second issue being raised, including by Dr Bibek Debroy, Chairman, Prime Minister’s Economic Advisory Council, is regarding the deflators used in the backcasting.

Let me tell you what has happened. It’s not that the underlying time series of deflators has changed significantly. The new series uses the new CPI [Consumer Price Index], and in the backcasting they have used the CPI (IW) [the Consumer Price Index for Industrial Workers]. That is not much of a difference because, if you look at the correlation between the CPI and the CPI (IW), the two series are highly correlated.

T.C.A. Anant. Photo: Shanker Chakravarty

There are, however, two changes. One, sectoral weights. Sectoral weights have changed because the sectoral composition has changed. If you look between the old series and the new series, in just one year, 2011-12, the shift in weights from trade towards manufacturing is quite large. Also, away from the informal sector to the formal sector. The implicit consequence of this shift in weights is that the deflators will change.

The second factor is what I had referred to earlier as the Index Number Problem of switching from a Laspeyres index to a Paasche index. In this case, if we had the information to adjust the base every year, we could start chaining our indices, but that is still not feasible because of the long gaps between benchmark years.
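A hedged sketch of the index-number problem, with hypothetical prices and quantities for two goods: the Laspeyres index weights price changes by base-year quantities, the Paasche index by current-year quantities, and the two diverge once buyers substitute toward the good whose price rose less.

```python
# Laspeyres vs Paasche price indices over the same hypothetical data.
p0, q0 = [10.0, 5.0], [100.0, 200.0]  # base-year prices and quantities
p1, q1 = [15.0, 5.5], [60.0, 300.0]   # current year: quantities shift toward
                                      # the good whose price rose less

def total_value(prices, quantities):
    return sum(p * q for p, q in zip(prices, quantities))

laspeyres = total_value(p1, q0) / total_value(p0, q0)  # base-basket weights
paasche = total_value(p1, q1) / total_value(p0, q1)    # current-basket weights

print(f"Laspeyres: {laspeyres:.3f}")  # 1.300
print(f"Paasche:   {paasche:.3f}")    # ~1.214
```

Annual chaining would keep rebasing the weights so the two stay close, but, as Anant notes, that requires weight information every year rather than only in benchmark years.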

So, is it correct to infer that the growth rates came down in the back series purely on account of the choice of deflators? Is this criticism valid?

No, I don’t think so. The fact is, when we did the exercise in 2011-12, we had shown that there was significant overestimation in the old series. If you look at it sectorally, there was a very sharp reduction in trade, a very sharp reduction in the informal sector, and an increase in manufacturing. The reasons for each of these are different. The reason for bringing down trade was methodology related. The reason for the increase in manufacturing was partly the methodology and partly the MCA data. In a nutshell, if I ignored everything else, just the fact that there was a level reduction meant that the growth rate must have been lower on average between the 2004-05 [series] and the 2011-12 [series].

Why was the old series overestimating trade and the informal sector?

There were separate elements in trade and in the informal sector. In the trade [sector], it was because of the use of the GTI [Gross Trading Income], and there is an FAQ on that.[3] Now they are using a sales tax indicator. GTI overestimated [the Gross Value Added (GVA) in the trade sector] for a variety of reasons. Partly the long gap between the previous benchmark survey and the next benchmark survey. But it also overestimated this GVA, in part, because the GTI was derived from an output aggregate, which was aggregating the output of agriculture, trade and mining. Unfortunately, mining was one of those things that saw a huge boom towards the last phase before the financial crisis. Most of it was mining for export purposes and so would not have contributed to value added by trade. The informal sector story is more complicated and has been explained elsewhere [in the press note].

This also showed up in the investment and saving estimates of the household sector, which grew faster in this period than those of the corporate sector.

That was for the same reason. The saving and investment data is derived, so once you get a boom on the production side, the consequences are there in the expenditure-side estimates. Some of these are accounts based, so there is not much change, but in others, being residual in character, the change can be quite consequential.
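A minimal sketch of the 'residual' point, with hypothetical figures: if the total is taken from the production side and the accounts-based components are measured directly, whatever segment is derived as a residual absorbs any production-side overestimate in full.

```python
# Residual estimation: the household figure is whatever is left over once
# the directly measured components are subtracted from the derived total.
total_investment = 1000.0  # hypothetical total, derived from production side
corporate = 450.0          # accounts based, measured directly
government = 250.0         # accounts based, measured directly

household = total_investment - corporate - government  # 300.0

# If the production side overestimated the total by 50, the entire error
# lands in the household residual:
household_biased = (total_investment + 50.0) - corporate - government
print(household, household_biased)  # 300.0 vs 350.0
```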

To return to deflators for a moment: is it compliant with the UN System of National Accounts (SNA 2008) to use WPI deflators, and different deflators for different components of GDP?

Using different deflators for different components is SNA compliant. Ideally, the SNA would not want you to deflate nominal numbers at all. It would like you to get disaggregated volume data. Unfortunately, since we do not get volume data and price data at that disaggregated level, the best we can get is more aggregative value data, which is then deflated. Looking ahead, one of the great advantages of the GST database is that it can be mined at the establishment level. It will give you both volume and value data at a very disaggregated level. But that’s the future. The present method is therefore recognised by the SNA as an option.

In simple terms, the SNA is like a menu. There is the recommended option, which is, say, the 'chef's special'. And there are other things on the menu which are possible. When you say something is SNA compliant, it means it is there on the menu. We are not the only ones who do this. Everybody else also does similar things. The preferred way is to do it as disaggregated as possible. But in practice, you do the best you can.

So, generating statistics – whether in the old or the new series – involves judgement calls by the statisticians.

Yes, judgement calls in the sense of which indicator to apply. For example, I don't have price indices for a range of services. For those services, some proxy price is used. In some cases, it is the CPI. In other cases, they use the WPI or some component of the WPI. For backcasting deflators, whatever is used in the 2011-12 [series] forward calculation is also used in the backcasting. On that there is no serious change. The underlying price series is coherent. What has changed are the sectoral compositions and the differing levels of aggregation and disaggregation at which deflators are applied.

Let me give you one example where this has happened. In corporate GVA [Gross Value Added], earlier you were building it up from establishment-level data. In establishment-level data, we could apply much more disaggregated price indices, more appropriate to the establishment-level output. But now, since we use corporate GVA, which is much more aggregative, and also because the breakdown [from the MCA] is not as granular as the ASI breakdown, the deflator which is now applied to build up the GVA is coarser than what was done earlier. This creates some differences, but these are inherent in such technical work.
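A hedged sketch of why the level of aggregation matters, with hypothetical numbers: deflating two components separately with their own price indices does not give the same real GVA as deflating their sum with one averaged, coarser index.

```python
# Component-wise vs aggregate deflation of hypothetical nominal GVA.
nominal  = {"food products": 300.0, "machinery": 700.0}  # nominal GVA
deflator = {"food products": 1.10,  "machinery": 1.40}   # component indices

# Disaggregated: deflate each component with its own index, then add.
real_fine = sum(nominal[k] / deflator[k] for k in nominal)             # ~772.7

# Coarse: deflate the total with a single nominal-weighted average index.
total = sum(nominal.values())
coarse_index = sum(nominal[k] * deflator[k] for k in nominal) / total  # 1.31
real_coarse = total / coarse_index                                     # ~763.4

print(round(real_fine, 1), round(real_coarse, 1))  # 772.7 vs 763.4
```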

How is it that the CSO has this back series out in three months, when you did not do it for three years?

The issue was taking a decision on the best substitute for the MCA. This was a call which needed to be taken after doing all your homework, which they have presumably now done.

Were you not in favour of replacing MCA with ASI?

I was still trying to see if we can build a better model with corporate estimates. That was not working. Whichever way you looked at corporate data, whether from the RBI, or elsewhere, you could not link it with MCA.

This has become a source of discomfort to the critics of the backcasting, who say that since Prof. T.C.A. Anant did not agree to replacing the MCA with the ASI in the past, the CSO has erred in doing so now. This is also a question of the varying degrees of confidence placed in individuals in contrast to the system as a whole. It is observed in the discourse on the Reserve Bank of India, or the office of the Chief Economic Adviser, as well…

This is because we don't pay attention to processes. The CSO has been working at this and has been open to advice, suggestions and technical inputs. That is why even SNA non-compliant suggestions, such as the one made in the Mundle Report, were examined.[4]

To return to the methodology used by the CSO in the backcasting, is splicing SNA non-compliant?

Yes, splicing at highly aggregate levels is non-compliant. From its point of view, what the CSO has done is enough. It’s an honest effort. They have documented everything. When they come out with the release, you should be aware of the fact that this is what they have done, and then take a call on how best to use the results. Then, maybe over time, when long time series become available, you will be able to recognise that there is some element of a structural break which still persists. That is bound to happen.

As I said, beyond a point, time-series data has so many limitations that asking for perfection is futile.

The CSO has used Sales Tax data for backcasting unorganised trade. Unorganised trade does not really pay taxes.

It does and does not pay taxes. Ideally, you should have an annual trade survey. I am given to believe, going ahead, the NSS [National Sample Survey] informal sector survey will be done every year. This means we will get trade surveys every year. That will solve this problem much better.

Earlier, we had occasional trade surveys. Then we applied proxies for GVA in trade. The output proxy works over short periods of time. But when used over a decade, in which the structure of retail trade itself was changing very dramatically, that output proxy became very poor. The tax proxy is better. At least in the forward casting, it is the taxes paid by the non-corporate segment of trade that are used. Here we net out the Sales Tax paid by the corporate sector. That’s a reasonable proxy for turnover, if you like, in retail trade.
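A minimal sketch of that netting-out step, with hypothetical figures: the sales tax attributable to corporate trade is subtracted from the total, and the growth of the remainder is used to move the unorganised-trade benchmark forward.

```python
# Non-corporate sales tax as a turnover proxy for unorganised trade.
total_sales_tax = 1000.0     # hypothetical total sales tax from trade
corporate_sales_tax = 650.0  # hypothetical share paid by corporate trade

noncorp_tax = total_sales_tax - corporate_sales_tax  # 350.0

# The proxy's growth rate carries the benchmark-year GVA forward.
benchmark_gva = 500.0        # unorganised-trade GVA in a benchmark year
proxy_growth = 1.08          # hypothetical year-on-year growth of noncorp_tax
estimated_gva = benchmark_gva * proxy_growth
print(noncorp_tax, estimated_gva)  # 350.0, 540.0
```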

GST will be even better because it has a system of netting which ensures that everyone gets mapped in. Sales Tax had also been converted into some form of Value Added Tax [VAT] in most States from 2006 onwards. So, we are better off with Sales Tax. Annual trade surveys would be even better. Now that they have been approved, we will be through this problem also in about five years.

Just to go back to splicing—why is it not advisable?

This is a complex question. To answer it, we need to understand what splicing does.

First, why does one time series differ from another? Because of the method of computation, the data sources, and so on. When we do splicing, we fundamentally make an assumption that the two series are essentially alike, and that the difference between them can be adjusted over the splicing period.

But where the change between series is more fundamental, because of changes in coverage, institutional processes, etc., splicing becomes a poor solution. Ideally, we should account for the difference and try to find a suitable replacement.

Secondly, even if you have to splice, you are better off doing it on a disaggregated basis, not as aggregate splicing. That is, splice within each homogeneous segment which has maintained continuity.
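A hedged sketch of the mechanics, with hypothetical levels: in ratio splicing, the old series is scaled so that it matches the new series in the link year, which preserves the old series' growth rates but silently assumes the two series differ only by a constant level shift, exactly the assumption that breaks down when coverage or institutional processes have changed.

```python
# Ratio splicing: scale the old series to match the new series in the
# link year; pre-link growth rates remain those of the old series.
old_series = {2009: 80.0, 2010: 85.0, 2011: 90.0}  # hypothetical, old base
new_series = {2011: 99.0, 2012: 105.0}             # hypothetical, new base

link_year = 2011
factor = new_series[link_year] / old_series[link_year]  # 1.1

spliced = {year: level * factor for year, level in old_series.items()}
spliced.update(new_series)  # the new series takes over from the link year

for year in sorted(spliced):
    print(year, round(spliced[year], 1))  # 2009: 88.0 ... 2012: 105.0
```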

In our case, there were many sources of discontinuity. Consider what happened with the informal sector, where there was a compositional shift under way, away from self-employment work. Since the old series methodology was insensitive to this compositional shift, it was mis-estimating informal sector value added. Now that we are more sensitive to the fact that self-employment is different from wage labour, that source of mis-estimation is gone. In this case, pure splicing will not work. What is needed is to rework the past, accounting for the compositional change in the workforce.

Then there are elements where the mis-estimation will vary over the business cycle. The difficulty with our estimates of trade is one such example.

A third type of issue arises due to the switch from the ASI to the MCA database. The ASI is an establishment-level database. The MCA is an enterprise-level database. What has been happening is a change in the level of conglomeration in manufacturing from 1993 to 2009. Further, there is increased diversity in corporate activity. Earlier, corporates were much more narrowly specialised. Now you are seeing the rise of corporates which are more diverse. This makes the disconnect between corporate value added and establishment-level value added even sharper. These types of changes make splicing more complicated.

Did these result in overestimation in the 2004-05 series?

These changes led to underestimation in some cases and overestimation in others. In so far as corporate value added is concerned, the 2004-05 series was actually underestimating value added. To explain this, please recall that, in estimating GVA for corporate manufacturing, we used the ASI. The ASI covers only manufacturing establishments. But that creates a problem.

To illustrate, consider a company which has many manufacturing establishments across different States. That company may also have non-manufacturing establishments as part of its make-up. The total corporate GVA is going to be the sum of the manufacturing and non-manufacturing establishments. The ASI will only cover the manufacturing establishments. The non-manufacturing GVA inside this corporate enterprise is not adequately captured, because the ASI excludes it and the NSS surveys of service sector enterprises would not cover it adequately either. So, our estimates missed some part of the GVA.

The problem with this disconnect between the older and the newer methods of GDP compilation is that we don’t know how this mis-coverage varied over the business cycle. So, it could lead to overestimation or underestimation of growth.

So, this is a source of discomfort with the backcasting …

This is a source of discomfort because the method of classification which is used in the new series is institutionally complete. The method of compilation in the old series was not institutionally complete. There were possible holes left in the manner the compilation was done. The backcasting may have left those holes in place [for the period prior to 2011-12].

Is the Advisory Committee okay with the backcasting? We hear that members did not attend the meeting and did not respond to the minutes circulated.

The best answer I can give is to refer you to an episode in ‘Yes, Prime Minister’, where Sir Humphrey says: “What happened in the committee is what is recorded in the official minutes….”

We need some measure of moderation in the way we analyse institutions. We are excessively prone to hyperbole. We are excessively prone to politicising discussions. I think it will be best if we stay focussed on the technical aspects of what is being done and not the politics.

Specifically, looking at things from the prism of politics will not help us appreciate some of the challenges that confront us in the field of official statistics.

One major challenge arises from the fact that government is being modernised very rapidly. Different wings of government are adopting modes of e-governance in a big way. One big example of this right now is GST. In addition to GST, many departments are using e-governance in different ways. Thus, for instance, departments are proactively using social media to keep track of how the public is using and demanding services. Many of them have used online portals for delivery of services. But somehow, when it comes to the statistics side of government, we are not getting significant advantage from this modernisation.

Take GST, for example. At present, the way it is being used in statistics is traditional. The Department of Revenue is generating reports based on tax collections. Those reports are being used by the Statistics Office in its statistics compilation. This certainly improves data quality because the data which is now coming is more integrated. Previously, the Statistics Office would take the tax data from the Central government and separately from the State governments; now you are getting it from a single integrated source, so that improvement has already taken place. But the fact is that behind it is a much richer database which they are not yet tapping fully. I suspect, as with the MCA, that will happen eventually.

What is this rich database?

What exactly does the GST entail? Every entity that is filing and paying taxes is filing returns. All of this is being done electronically. The basic entity that is filing a GST return is an establishment. So, we know, in a fairly tight geographical area, what economic activity is taking place. This means we have the potential, for the first time, to build from establishment-level data a picture of the transactions.

It is technically possible to match these transactions. For example, a particular entity produces and another entity buys; the selling entity and the buying entity both file returns. It is possible to match these and get a picture of the nature of the transaction taking place. What we don’t have, and what is needed along with this, is a statistical framework which would take advantage of this, talk to the people generating this data about what sort of statistics is possible, and create a user community which can discuss the possibilities. That is what needs to be done.
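A hedged sketch of the matching idea: seller-side and buyer-side filings joined on an invoice identifier reconstruct individual transactions, with mismatches flagged the way a statistical back office might. The field names and records here are illustrative, not the actual GSTN schema.

```python
# Matching hypothetical seller and buyer GST filings on invoice IDs.
seller_filings = [
    {"invoice": "INV-001", "seller": "ESTAB-A", "value": 50_000},
    {"invoice": "INV-002", "seller": "ESTAB-A", "value": 20_000},
]
buyer_filings = [
    {"invoice": "INV-001", "buyer": "ESTAB-B", "value": 50_000},
]

buyer_by_invoice = {f["invoice"]: f for f in buyer_filings}
for sale in seller_filings:
    purchase = buyer_by_invoice.get(sale["invoice"])
    if purchase is None:
        # A revenue office may ignore this; a statistical office would flag it.
        print(sale["invoice"], "no matching buyer return")
    elif purchase["value"] != sale["value"]:
        print(sale["invoice"], "value mismatch between the two returns")
    else:
        print(sale["invoice"],
              f'{sale["seller"]} -> {purchase["buyer"]}: {sale["value"]}')
```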

The challenge for the statistics establishment is that a lot of these developments are taking place much more slowly than the pace of change in government. In the MCA, for instance, we were involved with the development process, but it could have been much faster had the integration between statistics and corporate affairs been better.

Will the GST filing system need changes for fully capturing this rich data?

I don’t think so. But a statistical back office certainly needs to be created. A system of data validation needs to be developed. Let me give you a distinction between the way a statistician validates data and the way a revenue person does. A revenue person principally validates from the viewpoint of the revenue implications. If something has no revenue implications, they don’t concern themselves too much with the inconsistencies. Increasingly, as the rate of taxation becomes flat, their basic concern becomes coverage: whether everyone has filed the return.

But consistency in what was reported this time and what was reported by the seller… whether the same codes are used… do they make sense? Or reporting production in one unit this time and a different unit the next time because of carelessness in data entry… these are things which the statistical office gets concerned about. Closer dialogue will improve the quality of filing and the usability of the data.

As an illustrative example: many years ago, we created a statistics office in the Commerce Ministry called the DGCIS [Directorate General of Commercial Intelligence & Statistics], which would work with trade data in partnership with the customs agencies. They would look at the data from the point of view of statisticians: the commodity composition of trade, fairly detailed commodity classification, and so on and so forth. That feedback between the DGCIS and Customs created a certain amount of validation. It’s not that the forms have changed very much.

Can a research community be built among users?

A research community can be built, but that is a more challenging problem because tax data is inherently subject to a lot of privacy concerns. In addition to individual privacy, there are concerns of commercial confidentiality. Access for researchers will have to be in a manner which protects all those concerns. Modalities of doing that exist; other countries have done it. It is also possible, within the government, for the Statistics Office to analyse the data on behalf of researchers and generate broad trends which could be much richer than what is currently being done. That’s a fairly big challenge. It would need government to give it much higher priority. Now, the problem here is that the producer of the data is the Revenue Department, but the potential beneficiaries are scattered all over government. There is a need to create a dialogue between them.

Who are these potential users?

They could be anybody… the Reserve Bank of India, the Department of Economic Affairs [in the Finance Ministry], the Commerce Ministry… there are possibilities of looking at what the trade implications are. Maybe other ministries as well. It could be Handlooms, or Health. Creating that interest group… because what you don’t know [is] what you can get. That is the challenge.

[Puja Mehra is a Delhi-based journalist. She won the Ramnath Goenka Excellence in Journalism Award in 2008 and 2009 for her stories on the impact of the financial meltdown triggered by the Lehman Brothers' collapse, and the subsequent global economic downturn, on the Indian economy. She was formerly a Senior Deputy Editor at The Hindu. She can be contacted at [email protected].]

Notes and references:

[All URLs were last accessed on December 14, 2018]

1. Press Note on National Accounts Statistics Back-Series 2004-05 to 2011-12 (base 2011-12) [PDF 1.11 MB]. Return to text.

2. Ministry of Corporate Affairs, Government of India. 2018. "What is XBRL?". [http://www.mca.gov.in/XBRL/WhatisXBRL.html]. Return to text.

3. Press Note on National Accounts Statistics Back-Series 2004-05 to 2011-12 (base 2011-12) [PDF 1.11 MB]. Return to text.

4. A report by a technical committee of the National Statistical Commission that also backcast the GDP series. The computation gave substantially higher GDP growth rates for the years of the Manmohan Singh government's tenure. The report was hastily dismissed by the Modi government as unofficial. Return to text.

5. For more information on the difference between the old and the new methods, read Nagaraj, R. 2018. "Why factory output figures are suspect", The Hindu BusinessLine, September 16. [https://www.thehindubusinessline.com/opinion/why-factory-output-figures-are-suspect/article24961684.ece]. Return to text.
