Wednesday, July 2, 2008

“Gut feeling” in Real Estate Decision-making

If you ask a developer, investor, lender, or appraiser how they make a decision to develop, invest, lend, or render a value judgment, you might get an involved, complex answer. A common thread in these answers, however, is the final step, which is typically, “the decision ultimately came down to a ‘gut feeling.’” Gut feeling is an expression people use to describe the decision-making step, usually the last one before the decision is made, in which they must use judgment to solve an ill-defined problem. An ill-defined problem is a situation in which the parameters of the problem are not clearly structured and there is consequently no single correct or easy answer. Another way to express the concept of an ill-defined problem is decision-making under uncertainty.

The ‘gut feeling’ concept is particularly important in the real estate context because most real estate problems are ill defined and require human judgment. The real estate industry is complex, the available data are often limited, incomplete, or unreliable, and there are no unifying theories on which people can base objective decisions. A subjective “gut feeling” therefore substitutes for an objective solution, and human judgment becomes an important factor in a real estate decision (to develop, to invest, to lend, etc.). To understand how real estate decisions are made, we first need a basic understanding of general human information problem solving.

Human problem solving model

Human information problem solving is the process humans go through to develop a solution to a proposed problem. The fields of physiology and cognitive psychology have produced abundant research on human information processing. Two researchers in particular, Newell and Simon (Newell and Simon, 1972; Simon, 1978), developed a general 2-stage theory of human information problem solving.

Simon’s 2-stage theory of human information processing recognizes that problem solving is the interaction between two features, the task environment and a human information processing system consisting of short-term and long-term memory. 

The task environment is the complex external environment in which the decision-maker operates. The real estate industry is a prime example of a complex task environment. Real estate decision-makers make judgments against a complex economic, social, governmental, and environmental background. Just consider the information that bears on a decision to develop vacant land: zoning, subdivision codes, interest rates, available capital, demographics, topography, drainage, construction costs, traffic counts, availability of utilities, access…to name a few! Furthermore, real estate data is often unavailable, incomplete, and/or inaccurate, which adds to the complexity of this environment.

The human information processing system consists of two main features: short-term memory and long-term memory. All interaction between the task environment and the human information processing system is filtered through short-term memory, which makes short-term memory a critical component of this system.

Short-term memory functions as a filter because the task environment is complicated, continuous, and information rich, while short-term memory has limited storage capacity and relatively slow processing capability. As we will see, the storage and speed constraints of short-term memory impose important limitations on human judgment. Short-term memory is vital to human information processing not only as the link to the task environment but also because it is where human problem solving actually takes place.

Physically, short-term memory is composed of two components, a language interpreter and a problem space. The language interpreter’s function is to understand the problem. The problem space, which maintains control over short-term memory, is the information processing system’s representation or model of the task environment and is responsible for the problem-solving function.

The problem space (appropriately named!) is the source of the size and speed limitations attributed to short-term memory. First, the problem space’s capacity is limited to approximately four to nine ‘chunks,’ or pieces of information, derived from either the language interpreter or long-term memory. Second, the problem space processes information serially (one chunk at a time), which severely limits processing speed. These limitations are important characteristics of human information processing and the reason why we are all ‘only human’ and make ‘human mistakes.’
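To make these two limits concrete, here is a small, purely illustrative sketch in Python. The function and variable names are invented for this example and are not part of Simon’s model; it simply treats the problem space as a tiny buffer that holds only a handful of chunks and works through them one at a time:

```python
from collections import deque

PROBLEM_SPACE_CAPACITY = 7  # roughly the "four to nine chunks" described above

def solve_in_problem_space(chunks, process_chunk):
    """Illustrative only: hold a handful of chunks and process them serially."""
    workspace = deque(maxlen=PROBLEM_SPACE_CAPACITY)
    for chunk in chunks:
        workspace.append(chunk)  # when the buffer is full, older chunks fall out
    results = []
    while workspace:
        results.append(process_chunk(workspace.popleft()))  # one chunk at a time
    return results

# Only the last seven "facts" about a development site survive the capacity limit.
site_facts = ["zoning", "subdivision codes", "interest rates", "available capital",
              "demographics", "topography", "drainage", "construction costs",
              "traffic counts", "utilities"]
print(solve_in_problem_space(site_facts, str.upper))
```

Nothing about this code is real cognitive science; it only shows how a small, serial workspace forces information to be dropped or simplified.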

In contrast to short-term memory, long-term memory consists of large, database-like storage, called semantic memory, with an indexing system composed of recognition memory and associative structures. Semantic memory has unlimited storage capacity; however, the serial recognition memory index is slow and tedious. Associative structures establish “smart” shortcuts into semantic memory, providing quicker links to stored semantic information.
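The contrast between the slow recognition index and the associative shortcuts is roughly the contrast between scanning a list item by item and looking a key up directly. The sketch below is a loose analogy only; the stored “facts” and function names are invented for illustration:

```python
# Semantic memory sketched as stored facts (illustrative only).
semantic_memory = [
    ("cap rate", "ratio of net operating income to sale price"),
    ("zoning", "local rules governing permitted land uses"),
    ("absorption", "rate at which available space is leased or sold"),
]

def recall_via_recognition(term):
    """Slow, serial index: examine each stored item until one is recognized."""
    for stored_term, meaning in semantic_memory:
        if stored_term == term:
            return meaning
    return None

# Associative structures: direct "smart" links built up through experience.
associative_links = {term: meaning for term, meaning in semantic_memory}

def recall_via_association(term):
    """Quick link: jump straight to the stored information."""
    return associative_links.get(term)

print(recall_via_recognition("zoning"))
print(recall_via_association("zoning"))
```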

In a paper published in 1978, Simon explained that this general 2-stage model of human information processing is robust for both novice and expert problem solvers, whether they are solving well-structured (or defined) or ill-structured (or undefined) problems. However, significant differences between novice and expert information processing do exist. First, in short-term memory, experts form larger and richer data chunks, sometimes referred to as nested chunks, which expands the problem space’s processing capability. Second, novices generally access semantic memory through the slow, serial recognition memory index. Through experience and practice, however, experts develop quicker links to semantic memory, with associative structures providing efficient indexing, list structures, and intelligent links. Novices, with little experience to draw upon, have not yet developed the store of knowledge, in terms of both nested chunks and associative structures, needed to create these more efficient “expert” indexing systems.
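Chunking can be illustrated the same way. With the same limited number of slots, an expert who groups related facts into nested chunks effectively carries more information into the problem space than a novice who stores each fact separately. Again, a hypothetical sketch with invented groupings:

```python
SLOTS = 7  # the same limited problem-space capacity for both problem solvers

# Novice: each fact occupies its own slot, so some facts never make it in.
novice_chunks = ["zoning", "setbacks", "parking ratio", "interest rate", "loan term",
                 "amortization", "traffic counts", "demographics", "absorption"]

# Expert: related facts are nested inside larger, richer chunks.
expert_chunks = [
    ("entitlements", ["zoning", "setbacks", "parking ratio"]),
    ("financing", ["interest rate", "loan term", "amortization"]),
    ("market", ["traffic counts", "demographics", "absorption"]),
]

print(len(novice_chunks[:SLOTS]))                             # 7 facts fit; 2 are lost
print(sum(len(facts) for _, facts in expert_chunks[:SLOTS]))  # all 9 facts fit in 3 chunks
```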
