
POVERTY 101: OVERVIEW, THEORIES AND QUESTIONS

Economic want persists as a social ill only because men do not desire sufficiently that it shall cease . . .
the essential causes of poverty are determinable and its considerable presence unnecessary.

Jacob Hollander

Go back 300 years and poverty wasn’t a subject of inquiry. It was pretty much accepted by salon society (i.e., those few with the wealth and time to ponder the ways of the world) that most people struggled to stay alive. The masses enjoyed few luxuries and often died young due to plagues, injuries, wars, famines and bandit attacks. Since ancient times, an extremely small percentage of the population lived comfortably as royalty, nobles and landholders. A slightly larger group made an adequate living as merchants and tradesmen; everyone else barely survived by living off the land. As long-distance trade expanded over the centuries with better ships, roads, land vehicles, and supporting institutions such as armies and banks, a growing percentage of the world’s population was able to leave agrarian poverty behind for a better life in the cities. However, it was not until the industrial revolution of the 18th and 19th centuries that more than half of the world’s people could be said to live above bare subsistence level.

In England, heart of the Industrial Revolution, the cities grew rapidly as people took advantage of the expanding economic opportunities created by factories, trade and imperialism. However, not everyone found a comfortable life in the cities. Studies by Charles Booth found that about 30% of Londoners lived in poverty between 1887 and 1892, i.e. without sufficient income or wealth to ensure basic safety and health, certainly without access to comfortable clothes, hot water, fresh and tasty foods, etc. Seebohm Rowntree estimated that 28% of the town of York was in poverty in the prosperous year of 1899. Robert Hunter estimated a 20% poverty rate in northern American industrial cities in 1900.

These researchers defined "poverty" using extremely low expectations and standards of living, relative to today’s standards. Estimates of poverty levels in America made by James Tobin using a more contemporary standard, i.e. $3,000 annual income per family in 1965 dollars (about $18,500 per year in 2005 dollars, a bit less than HHS’s 2005 poverty guideline for a family of four), showed a 67% poverty rate in 1896, a 63% poverty rate in 1918, a 51% rate in 1935, shrinking down to 30% in 1950 and 20% in 1960. Only in the Industrial Age did educated persons begin studying the poor, and only after 1960 did “poverty” become a research topic officially recognized by the U.S. government. In other words, only in recent times have we been able to envision poverty as something that can be influenced by public policy, and not as the normal, unchangeable background condition.
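
As a rough illustration of how that inflation adjustment works, the sketch below converts Tobin's $3,000 (1965 dollars) standard into 2005 dollars using approximate annual-average CPI-U values; the CPI figures are approximations supplied here for illustration, not numbers from the original text.

```python
# Rough sketch: converting a 1965 income standard into 2005 dollars.
# The CPI-U annual averages below are approximate assumptions (1982-84 = 100),
# not figures from the original article.
CPI_1965 = 31.5
CPI_2005 = 195.3

threshold_1965_dollars = 3000  # Tobin's family-income standard, per the text

threshold_2005_dollars = threshold_1965_dollars * (CPI_2005 / CPI_1965)
print(f"${threshold_2005_dollars:,.0f}")  # roughly $18,600, close to the ~$18,500 cited
```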

The earliest studies of urban poverty recognized that the poor were not all alike, and that there were many causes contributing to a family's poverty; there were also many possible outcomes depending upon the family's situation. Nonetheless, prior to the 1960s, a high percentage of poor families had been subject to some combination of sickness, accidents, death of a wage earner, old age, low wages, and loss of income due to industrial changes, e.g. a major employer in a town going out of business. Many of these conditions would be addressed in the coming years, especially during the New Deal of the 1930s and the Great Society of the 1960s, as federal laws encouraged unionization, set workplace safety standards and minimum wages, and created social insurance and assistance programs such as Social Security, Food Stamps, housing assistance, unemployment insurance and Medicaid.

As black and Hispanic families found their way to northern industrial cities after WWI (especially after the mechanical cotton picker eliminated the need for much agrarian labor in the south during the 1940s), racism became an important structural barrier for many impoverished urban families. Various federal laws and court interpretations attempted to address this problem in the 1960s.

Other problems common to poor urban populations in the late 19th and early 20th centuries included low education levels, cultural and language differences (since many urban factory workers were recent immigrants from southern and eastern Europe), crime, and other forms of social disorganization. One noted response was the “settlement house movement”, which provided assistance in finding decent housing, promoting health and sanitary conditions, developing language and cultural skills, and supporting education for children. These programs were funded by donations from industrialists and “old money” families, and attempted to smooth the immigrants' assimilation into a new culture so as to promote the eventual economic success of their children. One of the earliest and most prominent urban settlements was Hull House in Chicago.

The settlement houses often worked closely with social workers, who attempted to address the effects of poverty on families through “social casework”. Social workers generally address and attempt to diminish family conflict, domestic abuse, anxiety, depression and other effects of the stressful living conditions imposed by poverty. Social workers obviously cannot get at the economic causes of a family's poverty, but they try to preserve and eventually strengthen the family or household using the tools of psychology, so that poor families might be ready to exploit opportunities to improve their economic condition once those opportunities come along.

Another approach was taken by Saul Alinsky and his Industrial Areas Foundation (IAF), an approach which differed quite radically from the social work paradigm. During the Great Depression and thereafter, Alinsky encouraged poor neighborhoods to “organize” and demand political power and corresponding attention from government. This was a social extension of worker organization in the labor union movement, and as with unions, was a source of direct economic uplift and social cohesion for poor urban neighborhoods. Today, the “community organization” movement is still active through groups like IAF, ACORN, etc. However, due to low home-ownership rates, high unemployment, and high mobility amidst the urban poor, along with social factionalization (i.e. awareness of differences between American blacks, Puerto Rican Hispanics, recent immigrants from Africa, Hispanic immigrants from Mexico, etc.), neighborhood organizing has a harder time "getting traction" in the inner city today.

During the 1920s, college professors in the sociology department at the University of Chicago (and later at other schools) became interested in studying the conditions of the urban poor and developing theories and deeper understandings of their plight. They set themselves in opposition to the social work school, which prepared its practitioners to work directly with the poor. Since that time, a variety of analytical ideas and recommended policy approaches have developed in response to poverty in America. The following paragraphs compare and summarize some of these ideas, and mention various programs that grew out of them.

The “Chicago School” sociologists of the 1920s developed a “social ecology” model centered on the assimilation of those in urban poverty into a more established, more affluent surrounding society, via the “natural evolutionary process” of social development. The social ecology model took the community, not the individual, as the unit of analysis; social work, by contrast, focused on the individual. Even though the Chicago School view eventually evolved toward laissez-faire individualism, which mostly denied a legitimate role for government in assisting the evolutionary process of community assimilation, the focus on community did lay the theoretical groundwork for community empowerment as the basis for social change. That groundwork was drawn upon by the community action movement of the early 1960s, which in turn shaped the federal War on Poverty of the Johnson administration and the Office of Economic Opportunity, a key agency in that "war".

An alternate viewpoint about poverty, which had its origins in the Progressive Era of the late 19th century, focused on unemployment, low wages, institutional racism and institutional classism (i.e., efforts by the rich to keep the poor and middle classes from gaining economic opportunities that might challenge their economic dominance) as causes, and on the “political economy” as the method of remedy. This viewpoint is sometimes called the “structural approach”, as it focuses on economic and social structures as causes of poverty, structures such as racism that require political remedies (e.g., affirmative action). The concept of "political economy" assumes that politics can influence the economy so that the wealth of the nation can be more fairly distributed. This has led to measures such as progressive income taxation, inheritance taxes, minimum wage laws, labor union protections, right-to-strike laws, job creation programs, and affirmative action requirements. However, many economists feel that such interventions decrease the overall wealth production of our nation by eroding incentives to work and invest. In recent years, political economy solutions have not been expanded, and seem to have lost most of their political support.

With regard to racism against blacks as a structural contribution to poverty, Swedish economist Gunnar Myrdal performed the definitive study of racism in America. During the late 1930s, Myrdal was retained by the Carnegie Corporation to identify and analyze racism against blacks in America and its social and economic impacts up through 1940. The study was published in 1944 as An American Dilemma: The Negro Problem and Modern Democracy. Myrdal’s analysis posited a triangle of factors: 1.) economics / income / wealth on the part of black Americans; 2.) intelligence / ambition / health / education / manners / morals on the part of black Americans; 3.) systematic racial discrimination on the part of American society and its institutions. Myrdal concluded that a positive effect in one area has a positive and interactive (multiplicative) effect on the other areas. By the same token, a negative influence on any of these triangle points pulls down the other two points. Therefore, per Myrdal, black poverty is a function of a multitude of factors, including racism, black cultural heritage and norms, historic wealth disadvantages, and personal circumstances.

In the 1950s, anthropologist Oscar Lewis put forth the “culture of poverty” concept. Lewis felt that poverty did cause unique social patterns amidst poor communities, some of which might retard the exploitation of opportunities for economic betterment when they present themselves. Lewis’ concept of the “culture of poverty” had its origins in political economy; he felt that people did not become poor because of their culture or predispositions reflected in that culture; they became poor because of unemployment, low wages, and unfortunate circumstances. However, their poverty could then be perpetuated by social and psychological means, per Lewis.

The culture of poverty view was seen at first as a small footnote to the social ecology theory and the structural viewpoint. It helped to explain why the “social ecology” process did not work speedily for blacks and Hispanics in poverty. It also supported those arguing for political economy remedies, as a reason why their remedies were needed (i.e., so as to stop the damage that anti-poor structures were causing). However, in the 1980s and 1990s, "culture" was re-interpreted and amplified by conservative groups to argue that anti-poverty interventions such as welfare were in fact counterproductive, as they induced dependency and vitiated initiative amidst the poor. A leader of this interpretation was Charles Murray (see his book Losing Ground, 1984). Under the conservative interpretation, the cultural effects of poverty are very strong and are not simply a function of an impoverished environment; they reflect deep-seated, mostly unchangeable behavioral characteristics that cause poverty. Later on, Murray implied in The Bell Curve (1994, co-authored with R. Herrnstein) that blacks ended up in poverty more often than whites and Asians because of genetic traits. Thus, at its farthest point of development, the conservative interpretation of “culture” subsumes genetic pre-destination, justifying the abandonment of all anti-poverty measures as futile.

As a result of the conservative re-interpretation of poverty culture within the past decade, the “culture” idea has evolved to subsume concepts such as deviance, pathology, cultural lag, underclass, social disorganization and dependency. Unfortunately, this has made the culture of poverty theory a target of attack for those favoring political economy remedies. It has polarized the discussion of poverty such that admitting to the self-perpetuating effects of poverty is considered politically incorrect by the more liberal factions, even if one contends that such effects reverse themselves when relevant opportunities are presented. William Julius Wilson thus attempted to reclaim the “underclass” concept as a real but temporary side-effect of structural factors (including racism, de-industrialization, concentrations of poverty exacerbated by public housing design, etc.). See Wilson, The Truly Disadvantaged, 1987. Wilson asserts that poverty culture is a side-effect that will dissipate if public policy effectively addresses the structural factors.

Another approach to the poverty debate involves the overall growth of the economy; this is known as the “macroeconomic” approach. This approach became popular during the Kennedy administration, but is still touted today by conservative factions. It posits that poverty is best dealt with through vigorous economic growth. Its motto is that “a rising tide lifts all boats”. However, in 1973 the poverty rate in America reached its lowest point; since then it has drifted up and down, without any clear trend. By contrast, gross domestic product per person (in inflation-adjusted dollars) has risen about 125% since 1973 (i.e., more than doubled). While research indicates that over 50% of families living under the poverty level remain there for only a limited time (1 to 5 years), a significant portion has been in poverty for over a decade. Thus, it appears that the macroeconomic approach has reached its limit of effectiveness, although some researchers argue that regional economic growth over time has a powerful effect upon poverty levels in inner-city neighborhoods (see, e.g., Paul Jargowsky, Poverty and Place, 1997).

The concept of “human capital” provides another dimension to the discussion of poverty. Human capital can be roughly translated as education and job skills, as they relate to a worker’s ability to assume a career track that provides sufficient wages and opportunities for advancement. The deficiencies of public education in the inner city due to local school control and financing, and the influence of the “neighborhood milieu” on students’ work habits and drop-out rates, leave children raised in impoverished neighborhoods unprepared to assume anything but menial work with little or no upward mobility. The "human capital" approach complemented the macroeconomic approach in the early 1960s, and was used to justify federal investment in education and career training. To some degree, the human capital approach was set against the “political economy” view that unionization and other pro-labor policies were the key to gaining sufficient wages for people from disadvantaged backgrounds.

Although the macroeconomic approach lost much of its relevance over the past decade, human capital has become more important than ever, given the trends of a globalizing, highly technical economy. Modern labor markets have increasingly segmented into “primary” and “secondary” sectors; the first sector requires educational achievement and managerial or professional skills, and provides opportunities for high wages and continual advancement; the second sector is low-paying, non-unionized and high-turnover, with little potential for promotion (e.g., fast food workers, janitors, child care workers, etc.). There are fewer and fewer jobs that fall between these two ends of the spectrum, and there are few ways to move from the secondary to the primary sector. Therefore, support for educational achievement at the elementary and high school level, and expanded opportunities for college or technical training, are very relevant to breaking the chains of intergenerational poverty in poor neighborhoods. Unfortunately, public investments in low-cost, high-quality educational opportunities for the poor have not turned out to be the “magic bullet” for ending poverty, although they do have positive effects for certain individuals.

Another anti-poverty strategy that was touted in the 1960s and 70s was the creation of government-subsidized or government-run job programs targeted at the poor. During the Nixon administration in the 1970s, a federal program, the Comprehensive Employment and Training Act (CETA), actually did subsidize the creation of public jobs targeted at people with “human capital” deficits from impoverished areas. And of course, during the Great Depression of the 1930s, the Federal Emergency Relief Administration and the Works Progress Administration created jobs in order to stimulate the economy and reduce the human suffering caused by high unemployment rates.

Mainstream economists argue that it is a waste of government funds to pay for jobs that the private sector would not itself create and support; they feel it is better to prepare people for productive “real jobs” than to artificially support jobs that don’t contribute much to the overall economy. There is no political support today for widespread job creation, although some job training programs may provide short-term subsidy to private companies to take on a limited number of inexperienced workers, for purposes of gaining technical job experience. Some analysts, however, still call for broad job creation programs targeted at areas of high unemployment, low income and low human capital (see, e.g., William Julius Wilson, When Work Disappears, 1996).

The current “social safety net” in America consists principally of Social Security, unemployment insurance, Food Stamps, housing assistance, Medicaid, welfare (now TANF, formerly AFDC), the Earned Income Tax Credit, and related supports such as school breakfasts and lunches and child care assistance for the working poor.

Some of these programs were started during the “New Deal” of the 1930s, but most find their origins in the “Great Society” of the 1960s. The most significant recent addition to this line-up is the Earned Income Tax Credit (EITC), a very limited program started in the 1970s which was significantly expanded by President Clinton in 1993. The program is part of the tax code, whereby the IRS provides a payment to working families with children who have low wages. For example, a family earning minimum wage (about $11,000 per year) can receive a check for $3,370, an income boost of about 30%. The program is now about as large and significant to low-income communities as food stamps and welfare. It represents the realization of the “negative income tax” scheme that President Nixon floated in the early 70s, which proposed to reduce or replace regular welfare (then called AFDC, Aid to Families with Dependent Children) with a tax credit payment to families that worked full time but earned low wages.
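
A quick back-of-the-envelope check of that "about 30%" figure, using only the two numbers quoted above (this sketch does not model the actual EITC schedule, which phases in and out with earnings and family size):

```python
# Back-of-the-envelope check of the income boost described above.
# These are the article's illustrative figures, not an actual EITC schedule,
# which phases in and out with earnings and family size.
annual_wages = 11_000   # roughly full-time work at the minimum wage, per the text
eitc_payment = 3_370    # credit cited in the text for such a family

boost = eitc_payment / annual_wages
print(f"Income boost: {boost:.0%}")   # about 31%, i.e. roughly 30%
```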

With regard to welfare and welfare reform, the federal Aid to Dependent Children program (ADC, the predecessor of AFDC) was started in 1935, during the Great Depression. The federal government attempted to help states whose mothers' support programs could not continue because of overwhelming need and low tax revenues. The program was always a partnership between the state, which operated the program and provided funds, and the federal government, which provided matching funds and placed minimum standards on the states regarding their assistance. Over time, the federal government required that program coverage and benefits be more consistent across all states, although there were (and still are) significant variations between states. In 1950, the program was expanded to provide support for the mother, not just for the children, eventually becoming AFDC. Until 1967, the program required a dollar-for-dollar reduction in the grant against any money that the mother earned through employment. There were also provisions to end the grant completely if the father was present in the household, or was providing child support. After 1961, the federal government allowed (but did not require) the states to ease up on this “all or nothing” stance regarding fathers, which arguably discouraged marriage or the father’s involvement. However, many states rejected this option.

Because of improvements in federal benefits requirements made during the “war on poverty” of the 1960s and during Nixon’s unappreciated expansion of federal domestic assistance programs in the early 70s, the number of women and children receiving AFDC assistance went from 3.1 million in 1960 to 10.8 million in 1974. Unfortunately, this expansion of aid provided grist for those in America who promote uncomplimentary images of the poor. Such image mongering focused on the inner city, where the loss of factory jobs and concurrent increases in single-female-parent households impacted many poor black and Hispanic families. Throughout the 1970s and 1980s, politicians representing suburban constituencies called for measures against allegedly high levels of AFDC fraud and abuse, and for requirements that welfare mothers find work. A variety of programs were started to address these issues. But they didn’t do much more than harass certain AFDC recipients. In the mid-1990s, however, the political pressures for “the end of welfare as we know it” became irresistible.

The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 eliminated AFDC and replaced it with TANF, Temporary Assistance for Needy Families. Welfare would now be time-limited, and the parent would be required to seek work (with some assistance regarding child care, job seeking, transportation and substance abuse treatment when needed). PRWORA reflected a concept put forth by Harvard economist David Ellwood called “divide and conquer / make work pay”. In 1983, Ellwood and economist Mary Jo Bane performed a study of welfare recipients indicating that long-term users were becoming dependent on government support, and that this dependency was creating a serious fiscal problem. Although the majority of all welfare recipients were short-term users, over 50% of recipients on the rolls at any one time were long-termers or people who continually returned to the welfare rolls. These users accounted for almost 2/3 of welfare costs. This study was instrumental in turning the tide of professional and academic opinion in favor of welfare reform.
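
The Bane and Ellwood finding contains an apparent paradox: most people who ever use welfare are short-termers, yet long-termers dominate the caseload and the costs at any given moment. This is a length-biased-sampling effect, illustrated in the sketch below with invented numbers (the 80/20 split and the spell lengths are hypothetical, not Bane and Ellwood's figures):

```python
# Hypothetical illustration of why short-term users can be the majority of
# entrants while long-term users dominate the point-in-time caseload.
# The split and spell lengths below are invented for illustration only.
short_share, short_years = 0.80, 1.5   # 80% of entrants stay ~1.5 years
long_share,  long_years  = 0.20, 10.0  # 20% of entrants stay ~10 years

# An entrant's chance of being "on the rolls" at a randomly chosen moment is
# proportional to how long their spell lasts (length-biased sampling).
short_persontime = short_share * short_years
long_persontime  = long_share * long_years

long_fraction = long_persontime / (short_persontime + long_persontime)
print(f"Long-termers' share of a point-in-time caseload: {long_fraction:.0%}")
# ~63% here, even though only 20% of the people who ever enter are long-termers.
```

The longer a spell lasts, the more point-in-time snapshots it shows up in, so long spells are over-represented in any cross-section of the rolls (and in total costs) even when they are a minority of all spells.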

In 1988, Ellwood published a book recommending a reform strategy that provided “non-welfare” support for working poor families, including the Earned Income Tax Credit, expanded eligibility for health care support, child care, school breakfasts and lunches, career training and other programs. This was one side of his conceptual “divide”, to “make work pay” for those struggling to raise families with low wages and lack of advancement opportunities within the increasingly globalized, high-tech economy. The other side addressed the dependency that AFDC had supposedly fostered. Ellwood recommended that welfare be time-limited with a lifetime cap, that it make vigorous efforts to gain child support from absent fathers, and that it provide mandatory and aggressive assistance to get jobs for recipients. Once former welfare recipients started working, they could then benefit from the “make work pay” provisions that would support the working poor.

In the early 90s, Ellwood’s plan caught fire amidst moderate conservatives and neo-liberal Democrats affiliated with President Clinton. PRWORA became effective in 1996, and had a dramatic impact. The welfare caseload dropped from about 5 million families getting AFDC in 1994 to 2.2 million under TANF in June, 2000. The employment rate for single mothers rose from 60% in 1994 to 72% in 1999. However, the effects on poor single-parent families from the mild recession of 2001 are only starting to be studied. Critics allege that the “make work pay” side of the bargain has not been fulfilled, given the inadequate resources provided after the 2001 / 2003 / 2005 Bush administration tax cuts. They claim that in the event of a jump in unemployment akin to what America experienced in the early 80s, the safety net will break, casting millions of children into abject poverty and ruined lives.

Various approaches to mitigating poverty, as discussed above, include: social casework with poor families; community organizing; “political economy” or structural remedies such as minimum wage laws, progressive taxation and affirmative action; macroeconomic growth; investment in human capital through education and job training; direct job creation; income supports such as welfare and the Earned Income Tax Credit; and work-oriented welfare reform.

The Biggest Question Regarding Poverty in America: Despite an economy that continues to grow, why doesn’t the poverty rate go below 11%, the low point reached in 1973?

From 1947 to 1959, when the estimated poverty level dropped from 33% to 22.4%, the economy’s generation of wealth (gross domestic product) per person in constant, inflation-adjusted dollars grew by 26% (about 2% per year). From 1959 to 1973, when the poverty rate dropped from 22.4% to its all-time low of 11.1%, the economy’s generation of wealth per person grew by 61% (about 3.2% per year). During the same time, real income per person grew by 59%. From 1973 to 2004, the economy’s generated wealth per person grew by 80% (about 1.9% per year). Real income per person also grew by 75.6% during that period. However, the poverty rate bounced around in the 11 to 15% range over that time. Why wasn’t there continued progress in eliminating poverty in America?
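
The "about X percent per year" figures above are roughly the compound annual growth rates implied by the total growth over each span. Here is a minimal sketch of that conversion (the date ranges and totals come from the paragraph above; small differences from the quoted per-year rates come down to rounding and to how the endpoint years are counted):

```python
# Converting total growth over a period into an approximate compound annual rate.
# The totals and date ranges are those quoted in the text; minor differences
# from the quoted per-year figures reflect rounding / endpoint-year choices.
def annual_rate(total_growth: float, years: int) -> float:
    """Compound annual growth rate implied by total_growth over `years` years."""
    return (1 + total_growth) ** (1 / years) - 1

print(f"1947-1959: {annual_rate(0.26, 12):.1%} per year")   # ~1.9%, "about 2%" in the text
print(f"1959-1973: {annual_rate(0.61, 14):.1%} per year")   # ~3.5% (the text says about 3.2%)
print(f"1973-2004: {annual_rate(0.80, 31):.1%} per year")   # ~1.9%
```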

One theory is that poverty has reached a “natural level” akin to natural unemployment; i.e., the 11 to 15% poverty range of the past 30 years represents a short-term “waiting room” for immigrants from poor countries and for Americans who have experienced unfortunate events (divorce, injury, a failed business venture, unemployment, etc.). For both groups, the stay in poverty is expected to be short, under 5 years. Arguably, a large portion of the residual poor in America is constantly turning over, with people continually entering and leaving, staying for a short time until they can find and exploit opportunities for increased income and wealth. However, it is clear that some portion of those in poverty today are families living in urban areas and rural zones that have been in poverty for several generations.

The issue of distribution of income in America is strongly related to the crisis of poverty. As the growth figures above indicate, the American economy generates more wealth each year; in 2003, it cranked out more than twice as much per person as it did in 1963. If a set portion of the increase in wealth each year were directed into the pockets of the poor, poverty would fade away. That was the trend until 1973, but it has not been since. Just who is keeping the growing wealth?

The following table indicates that the American economy works to make the rich richer and the poor poorer, and that the middle isn't holding its ground. This is the familiar "quintile chart", showing the percentage of all income that goes each year to American households, broken down into five equal-sized groups ranked by income. The lowest fifth (20%) of all households (i.e., the households with the lowest incomes) got 4.1% of all income in 1936. By 1953, more of the nation's income had shifted towards the lowest group (4.9%). But after that came a slide, such that today's lowest fifth gets less than what the lowest fifth received during the Depression (of course, they are still better off than during the Depression because the "economic pie" is so much bigger now; this table is about how the pie is sliced, not how big it is).

The trend for the next fifth of households and families is pretty much the same. For the middle fifth, the great "middle class", things got better until the early 70s. Since then, the middle class has been getting less and less of the pie (and even though the pie continues to grow, it doesn't grow fast enough to improve the lot of the middle class; their real income has been more-or-less flat since 1973). The next 20%, which roughly corresponds to the upper-middle class, fared somewhat better than the middle group; they got more of the pie up through the mid-70s, and held on to that share into the 80s. However, after that they too started getting a smaller cut.

The true beneficiaries of the American economy's growth capacity have been the upper 20%, roughly the "well-off" to the "super-rich". Their share of the pie dropped quite a bit from 1936 to 1973, but then rebounded to just about where it was during the Depression. The overall trend of the rich getting richer continues within the top 20%: the top 10% do better than the second 10%, the top 5% better than the second 5%, the top 1% better than the next 1%, the top 1/2% better than the next 1/2%, etc.

Percent of aggregate income received by each fifth of households:

Year    Lowest 5th    Second 5th    Middle 5th    Fourth 5th    Highest 5th
1936        4.1           9.2          14.1          20.9          51.7
1953        4.9          11.3          16.6          22.5          44.7
1961        4.6          11.0          16.4          22.6          45.4
1972        4.1          10.4          17.0          24.5          43.9
1983        4.1          10.0          16.5          24.7          44.7
1993        3.6           9.0          15.1          23.5          48.9
2003        3.4           8.7          14.8          23.4          49.8
2006        3.4           8.6          14.5          22.9          50.5
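
One way to compress the table's trend into a single number is the ratio of the highest fifth's income share to the lowest fifth's. The brief sketch below computes that ratio for a few of the years shown, using shares copied from the table above:

```python
# Ratio of the top quintile's income share to the bottom quintile's,
# using the shares (percent of all income) from the table above.
shares = {  # year: (lowest fifth, highest fifth)
    1936: (4.1, 51.7),
    1953: (4.9, 44.7),
    1972: (4.1, 43.9),
    1993: (3.6, 48.9),
    2006: (3.4, 50.5),
}

for year, (lowest, highest) in shares.items():
    print(f"{year}: top fifth received {highest / lowest:.1f}x the bottom fifth's share")
# The ratio falls from ~12.6 (1936) to ~9.1 (1953), then climbs back to ~14.9 by 2006.
```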

Do these numbers say that our nation has lost its will to eliminate poverty, thus explaining why poverty remains? Or are the conservatives correct in arguing that the long-term, intergenerational poor cannot be helped at any price? Would a vigorous attempt to reduce poverty backfire by eroding the growth rate of the economy - i.e., the equity-versus-efficiency trade-off? Did our nation's prosperity and expanding safety net do all they could do in the 1940s, 50s and 60s to reduce poverty before hitting a bedrock of "natural turnover" and untreatable situations? Or could a renewed investment of minds and money reduce poverty another 10 points, into the 2 to 5 percent zone, while returning economic benefits that would help our economy to grow in the long run?



SOME IMPORTANT LINKS:
Budget of a Family in Poverty (and remember, this family makes 50% more than minimum wage)
Center on Urban Poverty & Social Change (Case Western U.)
Urban Institute
Institute for Research on Poverty (Univ. of Wisconsin)
National Poverty Center (Univ. of Michigan)
US Census Bureau, Poverty Reports
Joint Center for Poverty Research (Northwestern University and University of Chicago)
modestly named "Smart Library on Poverty" (Harvard JFK School)

FOOTNOTE: An interesting book regarding 1.) the overall question of poverty in America, 2.) the often misguided efforts by smart people to find an answer to poverty by academic analysis, 3.) the formation of a poverty analysis industry here in the US during the 1960s and 70s, and 4.) the relatively small place that community development plays in the overall anti-poverty picture is Poverty Knowledge by Alice O'Connor (Princeton: Princeton Univ. Press, 2001). I don't completely agree with Prof. O'Connor's conclusion that "structural inequality" is more important than "behavior and culture of the poor". In fact, I argue that focus on neighborhood-based development should be reduced because of the social and cultural reverberations that occur when a lot of poor people live close by. In the end, I feel that both political structure and cultural behavior must be reckoned with. But Prof. O'Connor's book is certainly an excellent overview of the basic questions and the ultimate objective -- which is to free families and individuals from the debilitation and wasted human potential caused by life in poverty, as opposed to "saving a neighborhood".