
De Brigard, F., & Sinnott-Armstrong, W. (Eds.). (2022). Neuroscience and Philosophy. Cambridge, MA: MIT Press.


11. Memory Structure and Cognitive Maps

Sarah Robins, Sara Aronowitz, and Arjen Stolk

11.1. Introduction

Over the course of any given day, we are exposed to vast amounts of information, and yet our memory systems are capable of encoding and later retrieving this information. This would be difficult, if not impossible, unless the stored information were structured—that is, organized across various dimensions such as space, time, and semantic content. The use of structure to facilitate effective retrieval can be thought of as a general mnemonic activity, both in terms of the sub-personal processes that organize the memory system and in terms of the personal-level strategies that we can use to intentionally facilitate recall of particular pieces of information (Aronowitz, 2018). Cognitive scientists interested in memory have thus long been focused on investigations of memory structure. How do we organize information and experiences so as to make subsequent retrieval possible?

A common way to conceptualize memory structures in the cognitive sciences is as a cognitive map. Cognitive maps, in the most literal sense, are mental representations that are structured in a way that reflects the features of real space and which aid in navigation. Grounding the structure of memory systems in this basic and general ability that is conserved across a wide range of species has obvious appeal. Cognitive maps thus offer hope of theoretical and interspecies unity, as well as the opportunity to learn more about the structure of human memory by investigating the neural systems and behavior of model organisms such as rats and mice, where more extensive and precise interventions are available.

Cognitive maps also present a puzzle. The appeal to these maps begins literally: as an account of how spatial information is represented. Their intended use, however, is more ambitious. Cognitive maps are meant to scale up and provide the basis for our more sophisticated memory capacities (e.g., Bellmund et al., 2018). Our memory systems, as well as those of animals, surely represent a variety of nonspatial information, and at least in humans, some of this information is richly conceptual and linguistic. The extension is not meant to be metaphorical, but the sense in which these richer mental structures are supposed to remain map-like is rarely made explicit. How precisely is this process of scaling up meant to go? How do cognitive maps represent nonspatial information? There are a range of ways that generalization and abstraction could occur, each of which comes with a unique set of empirical consequences and a distinct view of mental representation and memory structure. Each, too, comes with a set of particular concerns and challenges. Our aim in this chapter is not to defend any particular view, but instead to provide a framework for exploring the available options. This project is important for the neuroscience of memory because clarifying what cognitive maps represent and why has consequences for the methodology of identifying cognitive maps, the relationship between kinds of information in memory, and the relationship between memory and other forms of cognition. From a philosophical perspective, thinking carefully about cognitive maps is a window into understanding the nature of mental representation in memory and cognition more broadly. It would be an understatement to say that the nature of perceptual representations has attracted serious philosophical interest—and yet, the corresponding question in the philosophy of memory remains understudied. We also hope that this chapter can shed light on a debate about map-like forms of representation more generally (e.g., Camp, 2007, 2018; Rescorla, 2009).

A few caveats: the aim of this chapter is to understand what cognitive maps are and how they are incorporated into memory research. As such, we will not start by defining a cognitive map. Instead, we’ll consider empirical work that appeals to this concept, taking note of definitions given by others along the way, and attempting to derive a working definition that fits at least the majority of this research. We do not intend our review of this empirical work to be exhaustive. When determining what to include, our primary focus is on the views of cognitive maps that have been developed into accounts of memory structure. We recognize, but do not discuss, the extensive literature on cognitive maps as competing models of spatial navigation and representation in animal cognition (see Bermudez, 1998, and Rescorla, 2017, for reviews).

We begin, in section 11.2, with a survey of two traditions. The first of these traditions is the foundational work on cognitive maps that assumes these maps represent information in a spatial structure. The second is a review of alternative, nonspatial representational structures. From the former, we identify a set of themes widely shared by proponents of cognitive maps. From the latter, we extract general lessons for accounts of cognitive structure. With this background, in section 11.3, we turn to several cutting-edge projects that are engaged in the task of scaling up cognitive maps so as to accommodate nonspatial information. These projects each do so in interestingly different ways. Some kinds of nonspatial information may also be represented in a map-like form because they are organized along dimensions that are substantially analogous to spatial information. In other cases, nonspatial information is represented as an abstraction from spatial information. And still other cognitive maps embed nonspatial information in a spatial structure. Putting these cases alongside one another reveals the variety of options available for building cognitive maps and the distinctive limitations of each. We conclude by reflecting on where these results take us in terms of understanding the place of cognitive maps in memory.

11.2. Foundational Work on Cognitive Structures

11.2.1. Cognitive Maps as Spatial Structures

Thinking of cognitive structures in terms of cognitive maps has a long history in psychology and neuroscience. The view began as an explanation of maze-running abilities in rats and, over time, has developed and changed as it has been used to capture a range of activities, from semantic knowledge structures to the navigational expertise of London taxi drivers (Collins & Loftus, 1975; Maguire, Frackowiak, & Frith, 1997). Throughout, theorists have aimed to make connections between these abilities in experimental animals and humans, but they have offered subtly different accounts of why these maps exist, what’s essential to their structure, and how the extension from basic neural structure to broader human competences is characterized.

Tolman (1948) is often identified as originating the idea of a cognitive map in this literature.[1] Tolman’s account of the cognitive map emerged from his work on maze running and spatial learning in rats—the dominant method and experimental framework in early twentieth-century psychology. For Tolman, cognitive maps were part of an argument that explaining the navigational abilities of rats required more cognitive, representational structure than was allowed for by the stimulus–response approach, which was dominant at the time. Specifically, Tolman documented rats’ ability to learn shortcuts in mazes—an ability inexplicable in terms of the animal’s learned association with particular places in the maze as individual stimuli. Tolman further observed that rats were capable of latent or non-reinforced learning. That is, rats that were simply allowed to explore mazes while fully fed—neither receiving nor wanting any reinforcement for their exploration—were able to learn routes through the maze. In order to explain this behavior, Tolman (1948) argued, the rat needed to be credited with the possession of a cognitive map that provided a “field map of the environment” (p. 192). Although Tolman’s evidence was based on the maze-running behavior of rats, the cognitive maps that he posited to explain this behavior were intended to apply to a much wider range of cognitive creatures. Indeed, his 1948 paper was titled “Cognitive Maps in Rats and Men.” The paper even concludes with a few pages of speculation on how particular features of human personality and social organization may be explicable within this framework.

Tolman’s (1948) initial proposal was solidified into a theory of neural structure with the publication of O’Keefe and Nadel’s The Hippocampus as a Cognitive Map (1978). For O’Keefe and Nadel, cognitive maps were not simply a general framework for thinking about mental structure; the cognitive map was proposed as a theory of hippocampal function. It was the first such proposal, and a highly systematic one, which helps to explain both the initial excitement about the idea and its lasting influence. There are further differences between Tolman’s use of the term and its use in O’Keefe and Nadel’s framework, the latter of which serves as the basis of “cognitive map theory” as it is now understood. First, O’Keefe and Nadel take the notion of a map far more literally than Tolman. The claim is not that the information processing of the hippocampus can be understood as map-like or spatial-ish, but rather that these cognitive maps are inherently spatial. In putting forward their theory, O’Keefe and Nadel make continued, explicit appeal to the Kantian idea of spatial structures as an organizing feature of cognition. These spatial maps are considered innate structures endemic to all cognitive creatures. Second, the extension of these maps to humans is not a metaphorical abstraction from the idea of a spatial map, but instead is characterized as an expansion of the kind of inputs that the spatial system can incorporate and process. This is best illustrated by their account of cognitive maps in humans: “the left hippocampus in humans functions in semantic mapping, while the right hippocampus retains the spatial mapping function seen in infra-humans. On this view, species differences in hippocampal function reflect changes in the inputs to the mapping system, rather than major changes in its mode of operation” (O’Keefe & Nadel, 1978, p. 3).

The centerpiece of cognitive map theory is the discovery of place cells (O’Keefe & Dostrovsky, 1971): neurons in the hippocampus that fire preferentially—that is, exhibit a burst of action potentials—in response to a specific location in the organism’s environment. When a rat navigates a maze, for example, some place cells fire at the beginning of the maze, others at the first fork, still others at the second fork, and so on. These place cells are organized topographically, so that their collective firing pattern reflects the rat’s route. After the maze has been run, the pattern is rehearsed, establishing a “map” that allows the rat to navigate this environment more easily the next time it is encountered.

The discovery of grid cells further enriches our understanding of the maps created by the hippocampal system (Hafting et al., 2005). Grid cells are found in the medial entorhinal cortex and, in contrast to hippocampal place cells, fire at multiple regularly spaced locations in the environment. Seen over the rat’s trajectory, the spatial firing patterns of these cells provide a grid-like representation of the organism’s environment. Other cells select for additional elements of the map—for example, cells that track objects, landmarks, and other agents (Høydal et al., 2019); head direction cells that fire selectively based on the way the organism’s head is oriented relative to its route (Taube, Muller, & Ranck, 1990); and cells that encode information about the distance to borders and edges (Solstad et al., 2008).

In our brief survey of this work, we want to highlight two important features of this literature as we see it. First, even though work on cognitive maps and memory structure has been done mostly with rodents and has focused on low-level neural structure, the intent has always been to make claims about the role of such maps in cognitive creatures more generally. That is, the aim was not simply to move away from overly simplistic stimulus–response models of nonhuman animal learning, but rather to think about the cognitive structures available to these nonhuman animals in terms of a framework that would encompass cognitive processes and cognitive creatures more generally. How the cross-species and beyond-spatial generalizations of the framework are envisioned differs across particular accounts and interpretations of cognitive maps.

Second, cognitive map theory remains influential and controversial. The framework continues to serve as a serious guide to inquiry into neural structure, especially with regard to the hippocampus (Redish, 1999; Bellmund et al., 2018). The view also serves as a steady target for alternative conceptions of neural structure and cognitive processing. Many of these criticisms involve claims and/or evidence that the information represented in these “maps” is nonspatial, including, for instance, findings of hippocampal cells encoding temporal context (Eichenbaum, 2014). That such interpretations of the content of cognitive maps are available is certain. Whether this sense of cognitive maps is an extension of the original framework or an objection to it is more contentious. Answering this question depends on asking, first, how the notion of “map” should be understood. We address this in section 11.3.

11.2.2. Nonspatial Cognitive Structures

Cognitive psychology has, since its beginnings, been interested in how humans and other cognitive creatures organize their vast amounts of knowledge so as to support efficient and effective search and retrieval. Although some of these cognitive structures are referred to as “maps,” in such cases the term is being stretched to nonstandard or metaphorical use. In this section, we’ll survey some of the foundational work on structures in cognitive science that do not seem to be map-like and do not primarily encode spatial content.

We’ll start with emergent conceptual structures (figure 11.1A and B). These are ways of organizing and relating information that emerge from amassing overlapping conceptual content and acquire their particular structures from patterns in the accumulated information. For example, this type of memory structure is often thought to support language comprehension and production. Adele Goldberg (2019) presents a view of language intended to explain the differences between the following kinds of minimal pair (p. 57):

Figure 11.1. Conceptual structures: A and B represent high-dimensional spaces, as described by Goldberg (2019), which combine semantic information with other dimensions such as syntax and phonetics, with dark lines representing phrase frequency; C shows a Bayesian network relating features of academic lectures.

1. I’ll cry myself to sleep.

2. I’ll cry myself asleep.

The former sentence is perfectly felicitous, whereas native speakers judge the latter to be odd. Perhaps sentence 2 is odd because it is novel or unusual, but as Goldberg notes, we are perfectly happy with unusual sentences like this (p. 76):

3. She’d smiled herself an upgrade.

Goldberg explains what is special about sentence 2 by appealing to the role of long-term memory organization. On her account, we encode much of the language we hear in a high-dimensional conceptual space that is structured by syntactic form, meaning (in context), phonetic features, and so on. Since there are systematic relationships between many of these features, over time, clusters emerge. Bits of language that are encountered more frequently are selectively strengthened, whereas the connection between a word or phrase and its initial context is weakened if it then fails to occur in similar contexts. We have only noisy, implicit access to this space. Thus, the problem with sentence 2 is that it is close to a stronger competitor, sentence 1. Conversely, sentence 3, while unusual, does not trigger us to recall a more common alternative formulation, and so we judge it to be felicitous.

This example helps us extract several key features of an emergent conceptual structure. Over time, in the case of language, certain features of language stand out as principal components. For instance, in English, sentences with the form of the double-object construction—for example, she (x) passed him (y) something (z)—almost always have the meaning that x causes y to receive z. This is a regularity that relates sentential form to semantic content, and crucially, Goldberg (2019) argues that this regularity arises in memory without any need to learn an explicit rule or start off with innate knowledge. Instead, the emergent conceptual structure allows for all kinds of regularities between a wide variety of features to be learned over sufficient exposure. Such a structure must therefore be (1) high-dimensional in order to capture the relevant features, (2) associative in order to relate these features flexibly without a prior model, and (3) content addressable in order to utilize stored information to produce responses efficiently.
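As a toy illustration of these three features (our construction, not Goldberg’s model; the phrases, feature vectors, strengths, and radius are all invented), stored phrases can be treated as weighted points in a feature space, with a probe judged infelicitous when a much stronger competitor lies nearby:

```python
import numpy as np

# Toy emergent conceptual structure: phrases are stored as points in a
# feature space with a frequency-driven strength; a probe phrase is judged
# infelicitous when a much stronger stored competitor sits nearby.

class EmergentStructure:
    def __init__(self):
        self.traces = {}  # phrase -> [feature vector, strength]

    def encode(self, phrase, features):
        # Repeated exposure strengthens an existing trace (associative,
        # frequency-sensitive storage); novel phrases start out weak.
        if phrase in self.traces:
            self.traces[phrase][1] += 1.0
        else:
            self.traces[phrase] = [np.asarray(features, dtype=float), 1.0]

    def felicity(self, phrase, features, radius=1.0):
        # Content-addressable lookup: find stored traces near the probe.
        # The probe sounds odd if a nearby competitor is far stronger.
        probe = np.asarray(features, dtype=float)
        own = self.traces.get(phrase, [probe, 0.0])[1]
        rivals = [s for p, (v, s) in self.traces.items()
                  if p != phrase and np.linalg.norm(v - probe) < radius]
        return not rivals or max(rivals) <= own + 1.0

mem = EmergentStructure()
for _ in range(50):  # "cry myself to sleep" is frequent in the input
    mem.encode("cry myself to sleep", [0.9, 0.1, 0.5])

# Sentence 2 sits next to a much stronger competitor -> judged odd.
print(mem.felicity("cry myself asleep", [0.9, 0.1, 0.45]))         # False
# Sentence 3 has no nearby competitor -> felicitous despite novelty.
print(mem.felicity("smiled herself an upgrade", [0.1, 0.8, 0.2]))  # True
```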

A second kind of nonspatial structure is a graphical model, a family of models within which we will focus on Bayesian networks (figure 11.1C). In the case we just considered, the regularity between the double-object construction and the type of causal-agential content was represented as a clustering or association. We neither represent the syntactic structure as dependent on the semantic content nor vice versa; the association is not specific enough to represent anything more than organization in terms of similarity. But in a Bayesian network, relationships between features are represented as a set of conditional (in)dependencies. For example, I might encode information about academic lectures as in figure 11.1C.

In this model, the nodes are random variables, and the edges represent dependencies (sets of local probability models). This allows us to assume that a node is independent of any other node, conditional on its parents: for instance, conditional on “conference quality,” “speaker style” is independent of “talk content.” Notice that in the above graph, an edge connects “conference quality” to “talk content,” but it seems unlikely that “conference quality” is a cause of “talk content.” A narrower class of these models, causal Bayesian networks, interprets dependence and independence causally. Consequently, laws of causality can be applied to the graph structures, such as transitivity, asymmetry, and nonreflexivity. As is, this graphical representation is completely equivalent to an enumeration of the local probability models. However, when treated as a representation structure, the graphical representation can have properties not shared by the set of local models. For instance, we might search the graphical representation with an algorithm designed specifically for search in a graph, which would produce different behavior than search over other forms of representing the same information (e.g., Dechter & Mateescu, 2007). Whether or not conceptual knowledge is in fact represented with, in addition to being representable by, graphical models, this hypothesis provides an interesting model of nonspatial mental structures.
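The conditional independence just described can be checked with a small simulation. In this sketch, the graph structure of figure 11.1C is assumed and all probabilities are invented for illustration:

```python
import random

# Toy Bayesian network with the structure assumed for figure 11.1C:
# conference_quality is the parent of speaker_style and talk_content.

random.seed(0)

def sample():
    quality = random.random() < 0.3                        # P(high quality)
    style = random.random() < (0.8 if quality else 0.2)    # local model
    content = random.random() < (0.7 if quality else 0.3)  # local model
    return quality, style, content

draws = [sample() for _ in range(100_000)]

def p(event, given=lambda d: True):
    pool = [d for d in draws if given(d)]
    return sum(event(d) for d in pool) / len(pool)

# Unconditionally, good style predicts good content...
print(p(lambda d: d[2], given=lambda d: d[1]))        # ~0.55
print(p(lambda d: d[2], given=lambda d: not d[1]))    # ~0.34
# ...but conditional on the parent, style carries no extra information.
print(p(lambda d: d[2], given=lambda d: d[0] and d[1]))       # ~0.70
print(p(lambda d: d[2], given=lambda d: d[0] and not d[1]))   # ~0.70
```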

Glymour (2001) analyzes a wide swath of human cognition in terms of causal Bayesian networks. These representations have three functions on his account: control, prediction, and discovery. To varying degrees, these functions could be fulfilled just as well, no matter the format of the probabilistic and causal information. However, the graphical format is significant as soon as the thinker employing the models is not perfect. Graphical representations figure directly in heuristics and inductive biases, such as a preference for an explanation that appeals to fewer causes (Lombrozo, 2007). Graphical representations allow simple access to points of potential intervention (Gopnik et al., 2004). As we noted above, we can define distinctive algorithms for search over graphical representations, and both noise and lesions to the model will operate differently, depending on representational format.

Thus, our second class of nonspatial models, causal Bayesian networks, is used to represent all kinds of causal knowledge. These representations function to identify interventions, predict new outcomes, and enumerate new possible theories. Bayesian networks are well suited to performing these functions because they organize information according to principles (i.e., the principles of causation) that (1) apply to the entire domain and (2) align with our interests in manipulating the environment.

Emergent conceptual structures and causal Bayesian networks are both structures that have been posited as operative in memory. Neither of these structures is in any notable way spatial or especially suited for spatial information. Both of these structures are functional: they are thought to have certain features that map onto computational advantages for the thinkers who employ them. Emergent conceptual structures are more flexible than causal Bayesian networks, since the latter can only represent causal relationships, whereas the former can represent non-causal associations. Correspondingly, causal Bayesian networks can express a more complex set of relationships within the causal domain, differentiating cause and effect, and identifying potentially complex causal patterns. Emergent conceptual structures represent many features in exactly one relationship: similarity.

Considering these two structures leaves us with a few key takeaways. First, even for models of abstract, domain-bridging, and perhaps distinctively human knowledge, cognitive structures are still thought to be tailored to particular functions. Second, there seems to be a trade-off between the generality of a representation (i.e., the kinds of features it could in principle represent) and its inferential power (i.e., the conclusions that can be derived from the connections among representational subunits). When data and processing limitations are held fixed, we could utilize a structure with more flexible (and hence weaker) connections or one with less flexible (but more inferentially generative) links. This idea is fairly intuitive, following from a more general connection between flexibility and informativeness. In the case of emergent conceptual structures, we saw an advantage of flexibility at work: Goldberg’s (2019) model allows speakers not just to track semantic or syntactic patterns separately, but also to combine all the information we have about a string of language and thereby to learn patterns of association that crosscut traditional linguistic categories. Causal Bayesian networks displayed one of the advantages of inferential power: by representing causal structures in graphs, we made the task of determining points of intervention vastly easier. These cases offer helpful comparisons for considering how to manage these trade-offs in characterizing the functions of spatial structures that serve as the basis for cognitive maps.

11.3. Cognitive Maps and Nonspatial Information

The foregoing sections divided recent and historical work on memory structures into two categories: spatial (or spatially grounded) cognitive maps and nonspatial cognitive structures. In this section, we’ll look at how the line between them can be blurred, such that cognitive maps might be used to encode less obviously spatial information. Specifically, we ask how minimal the spatial format can be while still leaving us with a meaningful notion of a cognitive map as a particular kind of functional memory structure.

To do so, we require a more extensive understanding of the basic notion of a map from which the idea of a cognitive map is generated. There is no canonical account of cartographic representation available, but we can provide a sketch by building on a set of features proposed by Rescorla (2017). For Rescorla, maps (1) represent geometric aspects of physical space,[2] (2) have veridicality conditions, (3) have geometric structure, and (4) are veridical only if they replicate salient geometric aspects of the region being represented. Cognitive maps are, then, maps in a strict sense when they consist of mental representations with these properties.

However, the definition proposed by Rescorla will not capture crucial elements of the spatial maps we’ve already discussed, since his definition focuses solely on synchronic, intrinsic features.[3] The kind of cognitive maps we’ve surveyed are also used in navigation, and interpreted and updated accordingly.[4] This addition is crucial: a representation that has all the right internal properties but is never used in navigation is not really a map—and likewise with one that does not even potentially keep step with changes of information about the environment. Combining this functional role with Rescorla’s conditions also lets us derive a fairly distinctive feature of maps, both cognitive and otherwise: we often update a piece of a map, such as a representation of the rooms on my floor, without even assessing a possible re-mapping of global relations, such as the distance between my room and Samarkand. We’ll call this feature “locality.”

When extending the notion of cognitive maps to nonspatial information, we relax the definition to capture a more general (or perhaps analogical) sense of map. Most directly, point 1 will always be false because the information being represented is not spatial. This will require, in turn, changes to how the veridicality conditions in point 4 are understood and what navigation might mean.

In this section, we consider three ways of extending the cognitive map. Each involves a distinct way of modifying the first condition on cartographic representations—that is, what is being represented: (1) encoding content that is nonspatial but in some sense isomorphic to spatial content; (2) encoding content that is an abstraction over first-order spatial information; and (3) embedding nonspatial information within a spatial context.

Before we begin, there is a caveat. Our interest in this section is in representations of nonspatial information that are in some sense utilizing the map-like representations traditionally associated with space (for a description of these parameters, see O’Keefe, 1991). This neither entails nor follows from a relationship between the neural-level realizers of spatial and nonspatial representations. Howard Eichenbaum and associated scholars have long tried to challenge the cognitive map picture by pointing to nonspatial uses of neural resources thought to be part of cognitive maps—for example, by showing that cells in the hippocampus represent events within their temporal context and not just their spatial context (see also MacDonald et al., 2011; Aronov, Nevers, & Tank, 2017; Wood, Dudchenko, & Eichenbaum, 1999). Thus, this line of research is not a case of extending the cognitive map to encompass nonspatial information, so long as the claim is about a shared neural substrate rather than a shared representational structure.

11.3.1. Spatial-Isomorphic Information

Spatial-isomorphic content is a kind of content that is structured according to dimensions that functionally correspond to spatial dimensions. By functional correspondence, we mean that the regularities, limitations, and inference patterns that we commonly apply to spatial dimensions will for the most part apply to these nonspatial dimensions. For example, (Euclidean) spatial distance is symmetric: if my office is ten feet from the coffee machine, then the coffee machine is ten feet from my office.[5] Spatial-isomorphic content, since its dimensions functionally correspond to spatial dimensions, will tend to have a “distance-like” measure that is symmetric in the same way. It seems reasonable that, were we to have a dedicated cognitive mapping system for dealing with spatial content, this system might also be used for dealing with spatial-isomorphic content.
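A minimal sketch of that functional correspondence, with a wholly invented two-dimensional “taste space” standing in for the nonspatial domain:

```python
import numpy as np

# Invented "taste space" (strength x sweetness): a Euclidean measure over
# these nonspatial dimensions is symmetric in exactly the way spatial
# distance is.

def distance(x, y):
    return float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

espresso, latte = (9.0, 1.0), (4.0, 6.0)
assert distance(espresso, latte) == distance(latte, espresso)  # symmetry
print(distance(espresso, latte))  # ~7.07 "units" of taste space
```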

Constantinescu, O’Reilly, and Behrens (2016) offer some preliminary evidence that some of the core processes for representing spatial maps can be used for spatial-isomorphic content. Unlike related work on spatial representations of nonspatial information (e.g., Tavares et al., 2015), the authors went beyond neurally co-locating spatial and nonspatial activity. Instead, they focused on a signature of spatial representation: coding of a space into a hexagonal lattice, such that the rate of cell firing corresponds to the orientation of movement relative to the orientation of the lattice. Because the strongest firing occurs at 60° increments in orientation, the 360° of phase space are divided into six identical regions, giving rise to the lattice’s hexagonal symmetry.[6] The authors looked for this hexagonal symmetry as a mark of what are sometimes called human grid cells. Unlike the grid cells discussed in section 11.2.1, these neurons are not thought to be restricted to regions of the medial temporal lobe but instead are thought to occur throughout (some of) the brain regions that also form the default mode network, including the ventromedial prefrontal and posterior cingulate cortex. Still, previous work has associated these more distributed cells with spatial representations (e.g., Doeller, Barry, & Burgess, 2010). Rather than using a strictly spatial task, Constantinescu and colleagues (2016) taught participants a pattern of association between the appearance of a bird and a set of symbolic cues. The bird figure varied according to neck height and leg length, which allowed for a representation of possible bird figures in a two-dimensional space structured by these two features. The bird–cue relationships were chosen so that each cue picked out a single region of this “bird space.” The authors indeed found hexagonally symmetric responses (measured in fMRI) in a variety of default mode brain regions that seemed to correspond to hexagonal, grid-like representations of “bird space.”
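The six-fold signature can be stated compactly in code. The sketch below is a simplified stand-in for the fMRI analysis, not the authors’ method: we assume the grid-like signal is a sinusoid in six times the movement direction, so directions through “bird space” that differ by 60° evoke identical responses:

```python
import numpy as np

# Simplified stand-in for the six-fold symmetry signature: the grid-like
# signal is modeled as a sinusoid in 6 * theta, where theta is the
# direction of movement through "bird space" and phi is the (arbitrary)
# orientation of the lattice.

phi = np.deg2rad(17.0)  # assumed lattice orientation

def grid_signal(theta_deg):
    return np.cos(6 * (np.deg2rad(theta_deg) - phi))

for theta in [10, 70, 130, 40]:
    print(theta, round(float(grid_signal(theta)), 3))
# 10, 70, and 130 degrees differ by 60-degree steps and evoke identical
# responses (0.743); 40 degrees, offset by 30, evokes the opposite (-0.743).
```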

The bird space used in this study was spatial-isomorphic, since it was structured according to two dimensions (neck height and leg length) that could be used to carve up a feature space with several space-like functional dimensions: it was a two-dimensional Euclidean space, with distance and orientation operating just as they would in a real space. Intuitively, the bird space is space-like in that it articulates a “conceptual space,” but also space-like in that neck height and leg length are themselves literally spatial dimensions. However, the design of this study allows Constantinescu and colleagues (2016) to differentiate between these two spatial aspects of the stimulus: because the bird space and the bird’s features in regular space are two distinct spaces, moving through each would produce different patterns of symmetrical activation. Since the stimuli were carefully chosen to avoid passing through phase space in the same way, the observed symmetries in fMRI signal should not reflect the bird’s position in visual space.

The use of hexagonal coding itself, if the authors are correct, suggests a second kind of isomorphism. Hexagonal coding is thought to be optimal for spatial representation in particular. Mathis, Stemmler, and Herz (2015), for example, present an optimal model that ranks hexagonal coding highest for spatial resolution in the two-dimensional plane. In ordinary space, we don’t normally privilege one dimension over another—that is, the north–south axis is not in general more informative than the east–west axis. This allows us to value spatial resolution uniformly across the plane. But we do typically privilege those two axes over the up–down axis in navigation. These two features must be assumed in order to show that the hexagonal lattice is optimal in the spatial domain. Neither feature needs to obtain in conceptual space. For instance, resolution in the bird neck-height dimension may be more valuable than information in the bird leg-length dimension. Were this to be true, the hexagonal symmetries observed by Constantinescu and colleagues (2016) would reflect a suboptimal representation. And so we can conclude that the use of a hexagonal symmetry code either reflects (a) a genuine isomorphism between the conceptual space and real space, or (b) a representational choice that favors spatial isomorphism over customization to the optimal division of conceptual space.

Another kind of spatial isomorphism centers on temporal rather than conceptual structure. Researchers commonly motivate the division of a temporal sequence into parts by analogy with the division of a spatial layout into parts. For instance, Zacks and Swallow (2007) write:

For quite a while, psychologists have known that in order to recognize or understand an object people often segment it into its spatial parts (e.g., Biederman, 1987). A new body of research has shown that just as segmenting in space is important for understanding objects, segmenting in time is important for understanding events (p. 83).

This literature on event segmentation asks how and why we draw boundaries between events. While Zacks and Swallow take the process of segmentation to be somewhat automatic, DuBrow and colleagues (2017) present contrasting evidence suggesting that segmentation can be active, abrupt, and driven by top-down goals.

Is this use of space merely a helpful metaphor, or is event structure genuinely spatial-isomorphic? One genuine isomorphism comes from the local structure of both representations—that is, a ubiquitous feature of cognitive maps is their locality. While I have a clear idea of how things in my apartment complex are oriented, and a good idea of how things in the Philadelphia Museum of Art are oriented, I do not necessarily have a joint map that neatly connects the two. Kuipers (1982, 2007) views this as a key starting assumption of cognitive maps even in machines: breaking a map into smaller, local maps allows the agent to remain noncommittal about global connections. This locality of representation seems to hold for temporal segmentation as well. Upon hearing a story, I might build a temporal “map” of the events of my friend’s adventure last week without forming any particular representation of how the details of the events she is describing fit into a temporal sequence of my own schedule last week. Locality naturally arises from the use of schemas. Baldassano, Hasson, and Norman (2018) found that temporal boundaries in event schemas across different kinds of events had a common neural signature, provided they shared an abstract schematic structure—that is, schematic representations impose a local structure relative to the device of the schema itself (e.g., from when you enter a restaurant to when you pay the check). Anchoring event segmentation in local (temporal) structure, then, creates an abstract isomorphism with spatial maps, which are anchored in local (spatial) structures.
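Here is a minimal data-structure sketch of locality (the places, coordinates, and connection are invented, loosely in Kuipers’s spirit): each local map carries its own metric frame, maps are linked only topologically, and there is no global frame in which cross-map distances could be computed:

```python
from math import dist

# Each local map has its own metric frame; maps are linked only by a
# topological connection, with no shared global coordinate system.

local_maps = {
    "apartment": {"door": (0, 0), "kitchen": (3, 1), "desk": (5, 4)},
    "museum":    {"entrance": (0, 0), "rotunda": (10, 2)},
}
connections = {("apartment", "museum")}  # reachable, but no shared frame

def local_distance(map_name, a, b):
    frame = local_maps[map_name]
    return dist(frame[a], frame[b])

print(local_distance("apartment", "door", "desk"))  # defined: ~6.40
# A "global" distance from desk to rotunda is simply not represented, and
# updating the apartment map never forces a re-mapping of the museum.
```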

We could point to a long tradition locating isomorphisms between space and time, tracing at least back to Kant (1781/1787, A33/B49–50). The strength of this tradition, however, is a double-edged sword. The abundance of spatial language used in our everyday talk about time makes it hard to genuinely conceive of the capacity to represent space and the capacity to represent time as distinct. The question of isomorphism between space and time may, from this perspective, be ill formed if the two capacities are more than accidentally linked to one another.

In summary, one way to extend the core notion of a cognitive map to nonspatial information is to treat the nonspatial information as spatial-isomorphic. These expansions are most efficient in cases where the nonspatial domain has significant regularities that mirror regularities that compose our representations of space, such as a symmetrical distance measure, roughly equal value assigned to discriminability among the dimensions on a two-dimensional plane, and a representation of related “spaces” that can be composed locally and independently.

11.3.2. Abstractions over Spatial Information

Another way to extend cognitive map theory—integrating work on neural-level spatial maps and cognitive-level structure—is to explore ways in which the neural systems that support cognitive maps can process and represent abstractions from spatial information. Here, we consider two kinds of spatial abstraction: (1) a structure where the abstraction itself is still isomorphic to the lower-order representation of space, and (2) abstractions over space that are no longer spatial-isomorphic.

Michael Hasselmo (2011) has used cognitive map theory—with its place, grid, and head direction cells—to build a map-based account of episodic memory. In keeping with key themes of cognitive map theory, Hasselmo’s theory is derived largely from work with rats but is meant to provide an account of episodic memory that can scale to humans. His book-length articulation of the view is entitled How We Remember, and the “we” here encompasses all mammals with similar hippocampal structure. Critical to Hasselmo’s particular version of cognitive map theory is the idea that the hippocampus and surrounding structures are a phase-coding mechanism, where the map-making activity of place and grid cells is integrated into maps of the environment at multiple spatial and temporal scales—that is, the hippocampus produces a series of cognitive maps, in Rescorla’s (2009) loose sense, representing the environment in more or less detail, as a function of the scale imposed by the cells from which they are activated. Together, these maps represent a particular event or experience, serving as the content of an episodic memory. To support the idea of multiple maps, Hasselmo incorporates neurons from the entire hippocampus into his model rather than focusing primarily on the dorsal portions of the hippocampus, as is common in much of the literature.

Hasselmo argues that neurons across the hippocampus share the mapping function. The differences between dorsal and ventral neurons are a matter of the size of their receptive fields, not their general function. As one moves across the hippocampus, from dorsal to ventral, the receptive field size of the neurons increases. This increase in receptive field size results in a comparable increase in the scale of the map. Maps featuring place cells with the smallest receptive fields represent the organism’s immediate surroundings in detail, whereas larger maps are produced by place cells with larger receptive fields, situating the experience within its (increasingly broad) spatial and temporal context. Importantly, the broadest “maps” may remain spatial in only the loosest or most metaphorical sense, situating the event within a social, conceptual, or experiential context.
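A toy sketch of this scale gradient (our illustration, not Hasselmo’s model; the track, field widths, and spacings are invented) represents each cell as a Gaussian place field whose width grows along the dorsal–ventral axis, yielding one map of the same event per scale:

```python
import numpy as np

# Each cell is a Gaussian place field; only the field width (receptive
# field size) changes from dorsal (fine) to ventral (coarse), giving one
# map of the same location per scale.

def place_response(position, centers, width):
    return np.exp(-((position - centers) ** 2) / (2 * width ** 2))

position = 12.0  # the animal's current location on a 1-D track (meters)
for width in [0.5, 2.0, 8.0]:          # dorsal -> ventral
    centers = np.arange(0, 50, width)  # one map per scale
    rates = place_response(position, centers, width)
    print(f"width {width}: strongly active fields near {centers[rates > 0.5]}")
# The finest map pins the event to a precise spot; coarser maps embed the
# same event in progressively broader surroundings.
```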

The result is a mechanism that produces representations rich enough to support episodic remembering. The existence of multiple maps allows for a single episode to be recorded at several “scales of experience” (Hasselmo, 2008), capturing the episode as occurring not only at a particular place and time but as associated with various objects and events. For example, consider my episodic memory of walking from my campus office to the university library to return a book this morning. On Hasselmo’s view, the representation of this episode is a conjoined set of maps of the event at different scales of experience. We can think of the smallest-scale map of the event in terms of traditional cognitive map approaches—as an allocentric map of the campus, along with my route from my office to the library. But other maps associated with this episode will represent this event at different spatial, temporal, and contextual scales. The more abstract spatial maps may represent campus in relation to the part of town, city, state, or continent in which I live. More abstract temporal maps will represent my route through this map as part of my schedule for the day, or schedule for the week, or activities characteristic of this time in the academic year. Further contextual maps will also be available, where the items represented in the map situate the landmarks along the route on different contextual scales—for example, this trip to the library as a stage in a particular research project, trees along this route at this time of the year, campus construction at this time, and so on.

Hasselmo’s model proposes that the cognitive map system can process increasingly abstract characterizations of space and time that can then serve as the content for more elaborate and higher-order episodic memories. His hierarchical picture, however, would seem to preserve some degree of structural similarity between levels. Behrens and colleagues (2018), by contrast, provide an account of abstraction from first-order spatial information in the form of eigenvectors corresponding to transformations between first-order spatial (and nonspatial) environments. Unlike a hierarchical, nested representation of experience, an eigenvector is an abstraction that does not share a structure with its first-order counterparts. Eigenvectors fall into a broader class discussed by Behrens and colleagues (2018), including inductive biases and factorizations. These are all features applying to a set of environments or state spaces that aid in learning but seem to require additional representational resources.

The authors argue for a common set of abstractive capacities operating over both spatial and nonspatial representation, which would utilize higher-order features to drive first-order prediction and planning. Presumably, whatever representational resources would be needed to supplement first-order maps with these higher-order features must be integrated tightly with the first-order maps themselves. Behrens and colleagues (2018) provide a few suggestions, but how this integration might work is still very much an open question.
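The following sketch gives a rough sense of eigenvector-style abstraction; the construction is ours, in the spirit of Behrens and colleagues (2018), rather than their model. The eigenvectors of a random-walk transition matrix depend only on the relational structure of the state space, so any two environments sharing that structure—whatever fills their states—share the same abstract basis:

```python
import numpy as np

# Random walk on a ring of 8 states: the transition matrix encodes only
# relational structure, not what the states look like.

n = 8
T = np.zeros((n, n))
for i in range(n):
    T[i, (i - 1) % n] = T[i, (i + 1) % n] = 0.5

eigenvalues, eigenvectors = np.linalg.eigh(T)  # T is symmetric here
# The leading nontrivial eigenvectors are smooth periodic functions over
# the ring: an abstract basis shared by every environment with this
# transition graph, whatever its first-order sensory content.
print(np.round(eigenvectors[:, -2], 2))
```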

11.3.3. Embedding Nonspatial Information in a Spatial Format

A third way to expand spatial representations would be to keep the spatial structure intact and embed nonspatial information within that structure. The most prominent example of mnemonic success, the method of loci (MoL), can be understood as organizing information in this way. In the MoL, subjects typically memorize a list of unstructured items such as random words or phone numbers by imagining these items in a structured environment such as a familiar childhood walk through the neighborhood. The MoL thus appears to be a way of exploiting a useful feature of spatial memory to store nonspatial content. The explicit process in the MoL involves two stages: (1) a strategy for encoding items through visualization, and (2) a strategy for retrieving items through a parallel visualization. For example, I could encode a list by imagining a walk through my childhood home, and then later recall the items by imagining walking through the home again and picking up the items.
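A minimal sketch of these two stages (the loci and items are invented) binds the items to an ordered sequence of familiar locations at encoding and reads them back off by re-walking the route:

```python
# Encoding binds each item to the next location on a familiar route;
# retrieval re-walks the route and reads each item back off its locus.

loci = ["front porch", "hallway", "kitchen", "stairs", "bedroom"]
items = ["milk", "7 4 1", "passport", "umbrella", "stamps"]

memory_palace = dict(zip(loci, items))               # encoding stage
recalled = [memory_palace[locus] for locus in loci]  # retrieval stage
print(recalled)  # items return in their original order
```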

Questions about how and why mnemonic structure works have received very little attention from memory theorists and scientists. The scant evidence that exists is, however, suggestive and intriguing, and invites us to ask more detailed questions about memory structure. Both the testimony of memory champions and some preliminary studies of expert performance reveal that success in the use of mnemonics does not require any particular level of intelligence or distinct cognitive skills. Instead, the key to success using mnemonic techniques is simply practice (Ericsson, 2003; Wilding & Valentine, 1997). These behavioral reports are complemented by neuroimaging studies indicating that those who use these techniques regularly differ from controls in functional but not structural brain features (Maguire et al., 2003; Raz et al., 2009). A recent and intriguing paper showed that after only six weeks of training, cognitively matched novices exhibited the same functional changes seen in memory champions (Dresler et al., 2017). Similarly, Yoon, Ericsson, and Donatelli (2018) have shown that a person who was trained thirty years ago to increase their digit span to more than 100 digits has retained many of the associated skills, despite undertaking no training in the meantime.

In the majority of these cases, the items to be memorized do not have an interesting nonspatial structure themselves (e.g., digit span, presentation order of a shuffled deck of cards). However, looking more closely at the history of this mnemonic technique reveals that it has also been used for semantically structured information. In De Oratore, Cicero recommended the technique for memorizing speeches, lyric poems, and the like. The classicist Minchin (2001, p. x) argued that Homer used this method to compose and perform the Iliad and Odyssey. In the Middle Ages, it was common for monks to use this technique to memorize the Bible’s 150 Psalms, as well as long passages from other scholarly texts (Carruthers, 2008). This continues in some forms of contemporary use, as when students use such techniques to help them remember conceptual blocks of information (Kerr & Neisser, 1983; Roediger, 1980; Wang & Thomas, 2000). The ability to achieve additional mnemonic gains by translating information that is already semantically or conceptually structured into a spatial format suggests that there is something about spatial structure in particular that is useful for memory retrieval.

We have surveyed three ways in which spatial maps might be “scaled up” to accommodate nonspatial information. First, spatial structures might be repurposed to represent nonspatial content with a suitably isomorphic structure: for instance, a conceptual “space” can be modeled as a two-dimensional Euclidean plane. Second, spatial structures might be used to represent abstractions over spatial information, whether in the form of higher-order but still spatially structured representations, or with summary statistics that are to some degree represented jointly with traditional maps. Third, nonspatial information might be embedded in a spatial map, overriding or supplementing nonspatial structure with an exogenous spatial structure.

Taking stock of these results in light of the two traditions discussed in section 11.2, we can draw two key conclusions. First, following on from Tolman’s original essay, how far the basic components of the cognitive map can be stretched to fit nonspatial content is still very much an open research question. Second, the trade-off between flexibility and inferential power that we observed in the case of conceptual representations characterizes the question of expanding the cognitive map as well. The more we stretch the notion, the more we reduce inferential power in favor of flexibility.

11.4. Why Extend the Cognitive Map?

The work that we have surveyed in this chapter testifies to a persistent interest in understanding the edges of applicability of the concept of a cognitive map. Given the difficulties of extending the cognitive map concept to nonspatial information, why are researchers pursuing this program? What benefits come from thinking about the structure of mental representations within this framework?

One reason to extend the cognitive map concept is in service of the broader project of tying cognitive functions to evolutionary history. Neuroscientists have long argued for functional and anatomical homologies across a broad range of mammalian species (e.g., Clark & Squire, 2013, but see also Zhao, 2018). As a theory of hippocampal function, cognitive maps offer the potential for an explanation of navigational and cognitive systems across all species in which these basic anatomical structures are preserved. A cognitive map, in the core spatial context, links a somewhat general capacity for long-term spatial memory to a more obviously adaptive capacity for spatial navigation. Likewise, a model on which some kinds of conceptual memory utilize a cognitive map provides a potential basis for understanding the evolution of conceptual memory. Viewed from this perspective, the cognitive map project embodies an explanatory aim: not just to have accurate models of mental faculties, but also to find models that explain how a creature with our lineage could have developed these faculties. Under this aim, the three types of extension we discussed have divergent implications. Spatial-isomorphic extension is a way of reusing structures that were tailored for a particular spatial purpose for new tasks that happen to have similar structure. Extension by abstraction, on the other hand, is a richer ability applied to the very same core domain. In comparison with these two, embedding nonspatial content in space appears more like a culturally developed trick that makes clever use of evolutionarily adapted structures. In short, there is a story to tell for each way the cognitive map could have become extended, but there are important differences in how this story goes in each case.

Another reason to persist with cognitive maps may be a general commitment to parsimony across cognitive systems. An enriched and extended notion of cognitive maps could provide the representational basis for a broad range of cognitive processes, providing a competitor to the logically structured language of thought, as developed primarily by Fodor (1975, 1987) and Pylyshyn (1984). An example of such a project has been explicitly advanced for spatial maps by Rescorla (2009). In fact, one might think of the work on mental models done by Johnson-Laird (1983) as a version of this kind of proposal, though his mental models are not as distinctively spatial.[7] The more forms of memory, and cognition more broadly, that can be understood from within this basic framework, the more streamlined and unified our understanding of cognitive systems becomes. This idea of parsimony as part of the explanatory aim in understanding the mind is notably different from a concern with parsimony within the mind as a matter of representational efficiency, though both might motivate an interest in conserving structures across cognitive domains.

As we noted in section 11.2.2, however, parsimony has consequences for the trade-off between flexibility and informativeness—that is, the more cognitive subsystems employ the same structures, the more flexible (and thereby less informative) the structures must be. Again, much depends on which form of generalizing to the nonspatial we adopt. Under spatial-isomorphic extension, there is a genuine form of parsimony in applying similar structures to isomorphic domains. But the utility of this approach will be limited to the number of domains that share these structures. There is little reason to hope or expect that all domains will be spatial-isomorphic in the requisite ways. The same argument might be harder to leverage for the other two kinds of extension. The discussion of the models by Hasselmo and by Behrens and colleagues revealed that abstractions of spatial information might be spatial-isomorphic, but they need not be. Embedding nonspatial information in maps likewise would not be predicted by the idea that related functions should in general utilize related structures. Instead, to motivate embeddings of this kind, we’d need a much stronger notion of parsimony—for example, that an explanation with fewer structures should be preferred even when it explains across distinct functions. This stronger form of parsimony would of course undermine the previous rationale under which we prefer a one-to-one fit of structure to function, since it encodes a preference for a one-to-many fit.

In the cognitive science tradition, memory has long served as one of the best candidates for inter-level integration. It is often, literally, the textbook model of how the various cognitive sciences can all contribute to a shared understanding of a particular cognitive process (e.g., Bermudez, 2014). Similarly, philosophers of neuroscience use memory as a case study in mechanistic levels of explanation (Craver, 2007). The appeal of cognitive map theory can be understood as a commitment to this aspirational model. The origins of cognitive map theory reflect this. O’Keefe and Nadel were inspired by the relatively simultaneous discovery of place cells, long-term potentiation, and other cellular and molecular processes in the hippocampus, and by the observation of patient H. M.’s loss of episodic memory in response to hippocampal damage during neurosurgery. The promise of inter-level integration relies on a shared understanding of the cognitive process at each of the cascading levels in the model. Each version of cognitive map theory’s extension offers a different account of the system whose levels of understanding and inquiry are being integrated. On the spatial-isomorphic interpretation of the extension, the system is one for representing regular and symmetric structures, spatial or otherwise. On the abstraction approach, spatial navigation is the core cognitive function or ability of this inter-level system. On the embedding approach, it is declarative remembering that is central to the inter-level project. In short, inter-level integration in cognitive science requires an understanding of the cognitive ability/system/function of interest, from which the process of decomposition and unification of distinct methodological approaches can begin. Such integration may be possible on each of these three forms of cognitive map theory, but they will each offer a distinct account of the cognitive ability being integrated.

These three reasons why one might be interested in expanding the cognitive map to nonspatial domains thus push us in different directions when it comes to how to extend the map. But beyond that, they also reveal a puzzle inherent in locating the cognitive map in memory. That is, to what extent is this information structure a proper part of memory? Given that alternative, nonspatial cognitive structures—such as those explored in section 11.2.2—can also be understood as accounts of memory structure, the need to defend the reliance on cognitive maps, however extended, becomes more critical.

11.5. Conclusion

What is a cognitive map? This concept is pervasive but hard to define comprehensively. At its core, a cognitive map represents the environment by taking advantage of the structure of space. As a metaphor, it offers a way of unifying our understanding of the representational structures endemic to cognition, giving a sense of shared function across a range of species and abilities. Insofar as cognitive maps remain an active research program, more attention should be devoted to the conceptual space between the literal and metaphorical versions.

The concept of a cognitive map is deeply intertwined with contemporary research on memory. In this chapter, we’ve surveyed a series of recent attempts to extend the concept to cover representations that are not obviously spatial. These projects, seen in the light of historical developments in the theory of spatial memory and cognitive structure, reveal both the ambitions and limitations of research into general-purpose representations.

O’Keefe and Nadel (1978) , in their subsequent work on cognitive maps in the hippocampus (discussed later in this chapter), find historical inspiration in the work of Gulliver (1908) .

Dabaghian, Brandt, and Frank (2014) argue that hippocampal maps represent topological (i.e., ordinal) features of space rather than geometric properties such as absolute distances and angles. We suspect this difference with Rescorla is at least partly terminological. Thus, we take point 1 to be satisfied by the model proposed by Dabaghian and colleagues.

We can see several potential ways to derive updating behaviors from Rescorla’s conditions. However, the same cannot be done for navigation, since it is clearly possible for a creature which does not behave at all, let alone navigate, to have a cognitive map on his definition.

Camp (2007) focuses more on these dynamic factors—on her view, a map is a representational system with a semi-compositional structure that determines what we can infer, how maps can be assembled, and how updating works.
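
A toy example may help here (our construction, not Camp’s formalism; the objects and coordinates are invented). In a map-like system, revising one object’s location implicitly revises all of its spatial relations at once, whereas a sentential store would need each affected sentence rewritten separately.

```python
# Invented scene: object locations on a 2-D grid.
map_rep = {"mug": (2, 3), "lamp": (5, 3), "book": (2, 7)}

def left_of(a, b):
    # Spatial relations are read off the map rather than stored one by one.
    return map_rep[a][0] < map_rep[b][0]

print(left_of("mug", "lamp"))  # True
map_rep["mug"] = (6, 3)        # a single update to the map...
print(left_of("mug", "lamp"))  # False: the relation is revised "for free"
```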

Interestingly, path representations are not always symmetric—see Kuipers (1982) for a theoretical computer science perspective on how this asymmetry interacts with the “map” metaphor.
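
A minimal sketch of what such asymmetry can look like, with invented place names: if route knowledge is stored as directed edges, knowing the way from one place to another does not entail knowing the way back.

```python
# Invented one-way route knowledge, stored as directed edges.
routes = {"home": ["bakery"], "bakery": ["office"], "office": []}

def knows_route(start, goal):
    # Depth-first search over directed edges only.
    frontier, visited = [start], set()
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        if node not in visited:
            visited.add(node)
            frontier.extend(routes[node])
    return False

print(knows_route("home", "office"))  # True
print(knows_route("office", "home"))  # False: route knowledge is asymmetric
```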

However, recent work in rodents (e.g., Stensola et al., 2015) has found a variety of cases where hexagonal symmetry in grid cells is distorted.
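
For readers who want a concrete picture: a common idealization (not Stensola and colleagues’ own analysis) models a grid cell’s firing map as the sum of three plane waves whose axes lie 60 degrees apart, and distortions of the reported kind can be mimicked by shearing the spatial coordinates. The spacing and shear values below are arbitrary.

```python
import numpy as np

# Sample a 10 x 10 arena on a 200 x 200 lattice.
xs, ys = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))

def grid_rate(x, y, spacing=2.0):
    # Sum of three plane waves with axes 60 degrees apart yields a
    # hexagonally symmetric firing map with the given field spacing.
    k = 4 * np.pi / (np.sqrt(3) * spacing)
    angles = [0.0, np.pi / 3, 2 * np.pi / 3]
    return sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)

hexagonal = grid_rate(xs, ys)           # idealized sixfold symmetry
sheared = grid_rate(xs + 0.2 * ys, ys)  # shearing distorts the hexagons
```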

Thanks to Felipe De Brigard for this suggestion.

  • Aronov, D., Nevers, R., & Tank, D. W. (2017). Mapping of a non-spatial dimension by the hippocampal-entorhinal circuit. Nature, 543, 719–722.
  • Aronowitz, S. (2018). Memory is a modeling system. Mind and Language, 34(4), 483–502.
  • Baldassano, C., Hasson, U., & Norman, K. A. (2018). Representation of real-world event schemas during narrative perception. Journal of Neuroscience, 38(45), 9689–9699.
  • Behrens, T. E., Muller, T. H., Whittington, J. C., Mark, S., Baram, A. B., Stachenfeld, K. L., & Kurth-Nelson, Z. (2018). What is a cognitive map? Organizing knowledge for flexible behavior. Neuron, 100(2), 490–509.
  • Bellmund, J. L. S., Gärdenfors, P., Moser, E. I., & Doeller, C. F. (2018). Navigating cognition: Spatial codes for human thinking. Science, 362(6415), eaat6766.
  • Bermudez, J. L. (1998). The paradox of self-consciousness. Cambridge, MA: MIT Press.
  • Bermudez, J. L. (2014). Cognitive science: An introduction to the science of the mind (2nd ed.). Cambridge: Cambridge University Press.
  • Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21(1), 145–182.
  • Camp, E. (2018). Why maps are not propositional. In A. Grzankowski & M. Montague (Eds.), Non-propositional intentionality (pp. 19–45). Oxford: Oxford University Press.
  • Carruthers, P. (2008). Meta-cognition in animals: A skeptical look. Mind and Language, 23(1), 58–89.
  • Clark, R. E., & Squire, L. R. (2013). Similarity in form and function of the hippocampus in rodents, monkeys, and humans. Proceedings of the National Academy of Sciences of the United States of America, 110(Supplement 2), 10365–10370.
  • Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407–428.
  • Constantinescu, A. O., O’Reilly, J. X., & Behrens, T. E. (2016). Organizing conceptual knowledge in humans with a gridlike code. Science, 352(6292), 1464–1468.
  • Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Oxford University Press.
  • Dabaghian, Y., Brandt, V. L., & Frank, L. M. (2014). Reconceiving the hippocampal map as a topological template. eLife, 3, e03476.
  • Dechter, R., & Mateescu, R. (2007). AND/OR search spaces for graphical models. Artificial Intelligence, 171, 73–106.
  • Doeller, C. F., Barry, C., & Burgess, N. (2010). Evidence for grid cells in a human memory network. Nature, 463(7281), 657–661.
  • Dresler, M., Shirer, W. R., Konrad, B. N., Müller, N. C., Wagner, I. C., Fernández, G., … Greicius, M. D. (2017). Mnemonic training reshapes brain networks to support superior memory. Neuron, 93(5), 1227–1235.
  • Dubrow, S., Rouhani, N., Niv, Y., & Norman, K. A. (2017). Does mental context drift or shift? Current Opinion in Behavioral Sciences, 17, 141–146.
  • Eichenbaum, H. (2014). Time cells in the hippocampus: A new dimension for mapping memories. Nature Reviews Neuroscience, 15, 732–744.
  • Ericsson, K. A. (2003). Exceptional memorizers: Made, not born. Trends in Cognitive Sciences, 7, 233–235.
  • Fodor, J. A. (1975). The language of thought. New York: Thomas Y. Crowell.
  • Fodor, J. A. (1987). Psychosemantics. Cambridge, MA: MIT Press.
  • Glymour, C. N. (2001). The mind’s arrows: Bayes nets and graphical causal models in psychology. Cambridge, MA: MIT Press.
  • Goldberg, A. E. (2019). Explain me this: Creativity, competition, and the partial productivity of constructions. Princeton, NJ: Princeton University Press.
  • Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111(1), 3–32.
  • Gulliver, F. P. (1908). Orientation of maps. Bulletin of the American Geographical Society, 40(9), 538–542.
  • Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., & Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806.
  • Hasselmo, M. E. (2008). Grid cell mechanisms and function: Contributions of entorhinal persistent spiking and phase resetting. Hippocampus, 18(12), 1213–1229.
  • Hasselmo, M. E. (2011). How we remember: Brain mechanisms of episodic memory. Cambridge, MA: MIT Press.
  • Høydal, Ø. A., Skytøen, E. R., Moser, M.-B., & Moser, E. I. (2019). Object-vector coding in the medial entorhinal cortex. Nature, 568(7752), 400–404.
  • Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
  • Kerr, N. H., & Neisser, U. (1983). Mental images of concealed objects: New evidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(2), 212.
  • Kuipers, B. (1982). The “map in the head” metaphor. Environment and Behavior, 14(2), 202–220.
  • Kuipers, B. (2007). An intellectual history of the spatial semantic hierarchy. In M. E. Jefferies & W.-K. Yeap (Eds.), Robotics and cognitive approaches to spatial mapping (pp. 243–264). Berlin: Springer.
  • Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology, 55(3), 232–257.
  • MacDonald, C. J., Lepage, K. Q., Eden, U. T., & Eichenbaum, H. (2011). Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron, 71, 737–749.
  • Maguire, E. A., Frackowiak, R. S., & Frith, C. D. (1997). Recalling routes around London: Activation of the right hippocampus in taxi drivers. Journal of Neuroscience, 17(18), 7103–7110.
  • Maguire, E. A., Valentine, E. R., Wilding, J. M., & Kapur, N. (2003). Routes to remembering: The brains behind superior memory. Nature Neuroscience, 6, 90–95.
  • Mathis, A., Stemmler, M. B., & Herz, A. V. (2015). Probable nature of higher-dimensional symmetries underlying mammalian grid-cell activity patterns. eLife, 4, e05979.
  • Minchin, E. (2001). Homer and the resources of memory: Some applications of cognitive theory to the Iliad and the Odyssey. Oxford: Oxford University Press.
  • O’Keefe, J. (1991). An allocentric spatial model for the hippocampal cognitive map. Hippocampus, 1, 230–235.
  • O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175.
  • O’Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Clarendon Press.
  • Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.
  • Raz, A., Packard, M. G., Alexander, G. M., Buhle, J. T., Zhu, H., Yu, S., & Peterson, B. S. (2009). A slice of π: An exploratory neuroimaging study of digit encoding and retrieval in a superior memorist. Neurocase, 15, 361–372.
  • Redish, A. D. (1999). Beyond the cognitive map: From place cells to episodic memory. Cambridge, MA: MIT Press.
  • Rescorla, M. (2009). Cognitive maps and the language of thought. British Journal for the Philosophy of Science, 60, 377–407.
  • Rescorla, M. (2017). Maps in the head? In K. Andrews & J. Beck (Eds.), The Routledge handbook of philosophy of animal minds (pp. 34–45). London: Routledge.
  • Roediger, H. L. (1980). Memory metaphors in cognitive psychology. Memory and Cognition, 8(3), 231–246.
  • Solstad, T., Boccara, C. N., Kropff, E., Moser, M.-B., & Moser, E. I. (2008). Representation of geometric borders in the entorhinal cortex. Science, 322(5909), 1865–1868.
  • Stensola, T., Stensola, H., Moser, M.-B., & Moser, E. I. (2015). Shearing-induced asymmetry in entorhinal grid cells. Nature, 518(7538), 207–212.
  • Taube, J. S., Muller, R. U., & Ranck, J. B. (1990). Head-direction cells recorded from the postsubiculum in freely moving rats. Journal of Neuroscience, 10, 420–435.
  • Tavares, R. M., Mendelsohn, A., Grossman, Y., Williams, C. H., Shapiro, M., Trope, Y., & Schiller, D. (2015). A map for social navigation in the human brain. Neuron, 87(1), 231–243.
  • Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208.
  • Wang, A. Y., & Thomas, M. H. (2000). Looking for long-term mnemonic effects on serial recall: The legacy of Simonides. American Journal of Psychology, 113, 331–340.
  • Wilding, J. M., & Valentine, E. R. (1997). Superior memory. Hove, UK: Psychology Press.
  • Wood, E. R., Dudchenko, P. A., & Eichenbaum, H. (1999). The global record of memory in hippocampal neuronal activity. Nature, 397, 613–616.
  • Yoon, J. S., Ericsson, K. A., & Donatelli, D. (2018). Effects of 30 years of disuse on exceptional memory performance. Cognitive Science, 42, 884–903.
  • Zacks, J. M., & Swallow, K. M. (2007). Event segmentation. Current Directions in Psychological Science, 16, 80–84.
  • Zhao, M. (2018). Human spatial representation: What we cannot learn from studies of rodent navigation. Journal of Neurophysiology, 120, 2453–2465.

Licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported license. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

  • Cite this Page Robins S, Aronowitz S, Stolk A. 11 Memory Structure and Cognitive Maps. In: De Brigard F, Sinnott-Armstrong W, editors. Neuroscience and Philosophy. Cambridge (MA): MIT Press; 2022.
  • PDF version of this title (3.3M)
  • EPub version of this title

In this Page

  • Introduction
  • Foundational Work on Cognitive Structures
  • Cognitive Maps and Nonspatial Information
  • Why Extend the Cognitive Map?

Related information

  • PMC PubMed Central citations
  • PubMed Links to PubMed

Similar articles in PubMed

  • Memory and Space: Towards an Understanding of the Cognitive Map. [J Neurosci. 2015] Memory and Space: Towards an Understanding of the Cognitive Map. Schiller D, Eichenbaum H, Buffalo EA, Davachi L, Foster DJ, Leutgeb S, Ranganath C. J Neurosci. 2015 Oct 14; 35(41):13904-11.
  • Topological Schemas of Memory Spaces. [Front Comput Neurosci. 2018] Topological Schemas of Memory Spaces. Babichev A, Dabaghian YA. Front Comput Neurosci. 2018; 12:27. Epub 2018 Apr 24.
  • Rapid improvement of cognitive maps in the awake state. [Hippocampus. 2019] Rapid improvement of cognitive maps in the awake state. Craig M, Wolbers T, Strickland S, Achtzehn J, Dewar M. Hippocampus. 2019 Sep; 29(9):862-868. Epub 2019 Feb 18.
  • Review Reconciling neuronal representations of schema, abstract task structure, and categorization under cognitive maps in the entorhinal-hippocampal-frontal circuits. [Curr Opin Neurobiol. 2022] Review Reconciling neuronal representations of schema, abstract task structure, and categorization under cognitive maps in the entorhinal-hippocampal-frontal circuits. Igarashi KM, Lee JY, Jun H. Curr Opin Neurobiol. 2022 Dec; 77:102641. Epub 2022 Oct 8.
  • Review Sharp wave/ripple network oscillations and learning-associated hippocampal maps. [Philos Trans R Soc Lond B Biol...] Review Sharp wave/ripple network oscillations and learning-associated hippocampal maps. Csicsvari J, Dupret D. Philos Trans R Soc Lond B Biol Sci. 2014 Feb 5; 369(1635):20120528. Epub 2013 Dec 23.

Recent Activity

  • 11 Memory Structure and Cognitive Maps - Neuroscience and Philosophy 11 Memory Structure and Cognitive Maps - Neuroscience and Philosophy

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

What is Representation of Data Structure in Memory Known as?

DSA Problem Solving for Interviews using Java

Representation of data structure in memory is known as Abstract Data Type . Abstract data types can be a list , stack , or queue which will be used to represent different data structures in memory.

What is Abstract Data Type ?

An abstract data type is a concept that tells us about the operations and data on which the operations can be performed without showing its actual implementation. The abstract data type provides the concept of abstraction where complex implementation details are hidden from the users.

An abstract data type is a concept that is independent of the implementations, for example, A data structure may have different implementations in a different language.

Therefore by using ADT users can be benefited as the data structures may have different implementations but at least the concept, the features, and the operations are known.

Note : Abstract data type does not specify how data is stored in the memory area.

For Example :

If the laptop is an ADT and it can perform various operations like browsing , playing games , chatting , etc. but we don't know how these functions work in the backend means we don't know the implementation but knows the features.

We can think of abstract data type as a gaming console where we are allowed to play games without even bothering about how games are running which means the gaming console hides the inner structure and implementation from the user.

An ADT can be implemented in multiple ways, for example, a stack ADT can be implemented using both arrays and linked lists where the stack data structure is implemented using other data structures.

Types of ADT :

There are mainly three types of abstract data type(ADT) :

Let us discuss them.

A list is a linear data structure that is used to store a collection of " similar types of data" . The list ADT is an ordered collection of data which is stored sequentially inside the list and the data have a linear relationship with each other.

A list is a common real-world entity that can be implemented by using dynamic arrays or linked lists where the list has data and certain operations that can be performed on the data without the implementation details.

There are certain operations that are performed on the list which are as follows :

  • isEmpty() : To check whether list is empty or not.
  • isFull() : To check whether list is full or not.
  • traverse() : Visit every element stored in the list.
  • insert() : Insert element in the list.
  • delete() : Delete element from the list.
  • size() : Gives size of the list.

Stack is a linear data structure that allows the addition and deletion of elements in a particular order. Stack ADT follows LIFO(Last In First Out) or FILO(First In Last Out) order. It is an abstract data type with a bounded or predefined capacity.

Stack ADT behaves like a real-world stack, for example, a stack of books, a stack of plates, etc. which allows operations at one end only. Stack ADT can be implemented using arrays and linked list which has data like top pointer and elements to be inserted. Operations on stack ADT is carried out easily with the help of the top pointer.

structure-of-stack-adt

There are some common operations that are performed on the stack which are as follows :

  • push() : Inserting element at top of the stack.
  • pop() : Deleting element from the top of the stack.
  • isEmpty() : Checking whether stack is empty or not.
  • isFull() : Checking whether stack is full or not.
  • peek() : Fetch value from particular position of stack.
  • count() : Gives number of elements in the stack.
  • stackTop() : Gives top element of the stack.

To learn more in-depth concepts of stack data structure visit Stacks .

A queue is a linear data structure that allows the addition of elements from one end which is the rear end and the deletion of elements from another end which is the front end. The addition and deletion of elements are called Enqueue and Dequeue respectively.

Queue ADT follows FIFO(First In First Out) or LILO(Last In Last Out) order. Unlike stack ADT, a queue is a kind of abstract data type which allows operations from both ends. It is called a queue because it behaves like a queue in real life.

for example, a queue of bikes in traffic, a queue of persons standing for a movie ticket, etc. in which whoever comes first, will get the service first. A queue can be used where we want to fetch the elements in the order of insertion. Queue ADT can be implemented by using stacks , arrays , and linked lists .

structure-of-queue-adt

There are some common operations performed on the queue which are as follows :

  • enqueue() : Insert element in the queue.
  • dequeue() : Delete element from the queue.
  • isFull() : Check whether queue is full or not.
  • isEmpty() : Check whether queue is empty or not.
  • count() : Count elements in the queue.

Abstract Data Type Model

Abstract data type model contains an application program and an abstracted entity that has data structures and functions bound with each other. So basically this model explains that the application program can use the abstracted entity as an interface through which the program can use data structures.

Even if the application program changes then also it can use the abstract data type model as an interface to interact with the data structures.

An abstract data type is considered a combination of encapsulation and abstraction.

Let us understand a bit about encapsulation and abstraction :

Encapsulation : It is the process of binding data and data members or functions inside a single entity.

Abstraction : It is a concept in which functionality is shown but implementation details are hidden from the users.

the-abstract-data-type-model

So, the ADT model consists of two types of functions that is a private function and a public function which is bound with the data structures in a single unit(abstract data type) which is called encapsulation.

Then abstraction is performed as we can use the functionality of the data structures without bothering about the backend implementations.

Abstract Data Type with a Real-World Example

Let us take a real-world example to understand abstract data type in detail :

Suppose considering a Laptop that has specifications like :

  • 14 -inch IPS LCD screen , Windows 11 OS , 8 GB Ram , 256 GB SSD , HD Webcam .

Following are the functionalities it can perform :

  • browsing() , playGame() , videoConference() , webChating() .

Here, we have taken Laptop as an entity that has some specifications or data on which some functionalities can be performed.

Above is the real-life implementation of functionalities and specifications that can be performed on the Laptop. Laptops can be different but the functionalities performed by any laptop will be the same.

The implementation of code may be different due to the syntax of different programming languages but the logical view of the data structure will be the same across programming languages.

This shows that an ADT is a concept that is independent of implementations.

To learn more about stack data structures and their implementation visit Stacks .

  • Representation of data structure in memory is known as Abstract Data Type .
  • Abstract data type is a concept that tells about operations and data on which operations can be performed without knowing the implementation details.
  • Abstract data type model is a combination of encapsulation and abstraction .

Ready to Build Strong Foundations? Enroll Now in Our Data Structures and Algorithms Course for Coding Excellence!

  • Dev Concepts

Data Representation in Computer Memory [Dev Concepts #33]

Home » News » Dev Concepts » Data Representation in Computer Memory [Dev Concepts #33]

Dev-Concepts-Episode-33-Data-Representation-in-Computer-Memory

  • Author: Aleksandar Peev
  • March 31, 2022
  • No Comments
  • binary , binary integers , devconcepts , floating-point , integer range , mathematics , maths , programming , signed integers , software engineering , unicode

In this lesson, we will talk about storing data in the computer memory . By the end of this article, you will know how to work with binary representation of integers , floating-point numbers , text , and Unicode .

Integer numbers are represented in the computer memory, as a sequence of bits : 8-bits, 16-bits, 24-bits, 32-bits, 64-bits, and others, but always a multiple of 8 (one byte). They can be signed or unsigned and depending on this, hold a positive , or negative value . Some values in the real world can only be positive – the number of students enrolled in a class. There can be also negative values in the real world such as daily temperature.

Positive 8-bit integers  have a leading 0 , followed by 7 other bits. Their format matches the pattern “ 0XXXXXXX ” (positive sign + 7 significant bits). Their value is the decimal value of their significant bits (the last 7 bits).

Negative 8-bit integers have a leading one, followed by 7 other bits. Their format matches the pattern “ 1YYYYYYY ” (negative sign + 7 significant bits). Their value is -128 (which is minus 2 to the power of 7 ) plus the decimal value of their significant bits.

8-bit-binary-integer

Example of signed 8-bit binary integer

The table below summarizes the ranges of the integer data types in most popular programming languages , which follow the underlying number representations that we discussed in this lesson. Most programming languages also have 64-bit signed and unsigned integers , which behave just like the other integer types but have significantly larger ranges .

ranges-of-integer-data-types

  • The 8-bit signed integers have a range from -128 to 127 . This is the  sbyte  type in C# and the byte type in Java.
  • The 8-bit unsigned integers have a range from 0 to 255 . This is the  byte  type in C#.
  • The 16-bit signed integers have a range from -32768 to 32767 . This is the  short  type in Java, C#.
  • The 16-bit unsigned integers have a range from 0 to 65536 . This is the  ushort  type in C#.
  • The 32-bit signed integers have a range from -231 … 231-1 (which is from minus 2 billion to 2 billion roughly).  This is the  int  type in C#, Java, and most other languages. This  32-bit signed integer  data type is the most often used in computer programming. Most developers write “ int ” when they need just a number, without worrying about the range of its possible values because the range of “ int ” is large enough for most use cases.

Representing Text

Computers represent text characters as unsigned integer numbers, which means that letters are sequences of bits, just like numbers.

The ASCII standard represents text characters  as 8-bit integers. It is one of the oldest standards in the computer industry, which defines mappings between letters and unsigned integers. It simply  assigns a unique number for each letter  and thus allows  letters to be encoded as numbers .

representing-text

Representing Unicode Text

The  Unicode  standard represents more than  100,000  text characters as  16-bit integers . Unlike ASCII it uses  more bits per character  and therefore it can represent texts in many languages and alphabets, like Latin, Cyrillic, Arabic, Chinese, Greek, Korean, Japanese, and many others. 

Here are a few  examples  of Unicode characters:

representing-unicode-text

  • The Latin letter “ A ” has Unicode number 65 .
  • The Cyrillic letter “ sht”  has Unicode number  1097 .
  • The Arabic letter “ beh”  has Unicode number  1576 .
  • The “ guitar ” emoji symbol has Unicode number 127928 .

In any  programming language , we either  declare data type  before using a variable, or the language  automatically assigns a specific data type . In this lesson, we have learned how computers store  integer  numbers,  floating-point  numbers,  text , and other data. These concepts shouldn’t be taken lightly, and be careful with them!

Lesson Topics

Representation of Data

Representing Integers in Memory

Representation of Signed Integers

Largest and Smallest Signed Integers

Integers and Their Ranges in Programming

Representing Real Numbers

Storing Floating-Point Numbers

Representing Text and Unicode Text

Sequences of Characters

Lesson Slides

Leave a comment cancel reply.

You must be logged in to post a comment.

Recent Posts

SoftUni-Franchise-Partnership-Serbia

Case Study 2023: SoftUni Serbia [SoftUni Globe]

Shelly-Academy-Autumn-Semester

Shelly Academy: Autumn Semester [SoftUni Globe]

Franchise partnership: softuni serbia [softuni globe].

SoftUni-Allterco-Partnership-Thumbnail-Image

Empowering Home Automation: The Collaboration between SoftUni Global and Allterco [SoftUni Globe]

About softuni.

SoftUni provides high-quality education, profession and job to people who want to learn coding.

The SoftUni Global “Learn to Code” Community supports learners with free learning resources, mentorship and community help.

SoftUni Global is the international branch of SoftUni, the largest tech education provider in South-Eastern Europe. We empower the IT business through talent acquisition and development, educators through learning content and tools, and individuals through organized zero-to-career programs for developers.

  • Services for Business
  • Hire a Junior Developer
  • Train to Hire
  • Online Learning
  • On Site Learning
  • Technical Assessment
  • Build an Academy
  • Services for Educators
  • Educational Content
  • Educational Software
  • Educational Services
  • Course Catalog

Individuals

  • Learning Resources
  • Learn to Code Community
  • About SoftUni Global
  • Privacy Policy
  • SoftUni Fund
  • Code Lessons
  • Project Tutorials

HTML Sitemap

Close

CS3 Data Structures & Algorithms

Chapter 11 memory management.

Show Source |    | About    «   10. 10. Hashing Chapter Summary Exercises   ::   Contents   ::   11. 2. Dynamic Storage Allocation   »

11. 1. Chapter Introduction: Memory Management ¶

Most data structures are designed to store and access objects of uniform size. A typical example would be an integer stored in a list or a queue. Some applications require the ability to store variable-length records, such as a string of arbitrary length. One solution is to store in list or queue a bunch of pointers to strings, where each pointer is pointing to space of whatever size is necessary to hold that string. This is fine for data structures stored in main memory. But if the collection of strings is meant to be stored on disk, then we might need to worry about how exactly these strings are stored. And even when stored in main memory, something has to figure out where there are available bytes to hold the string. In a language like C++ or Java, programmers can allocate space as necessary (either explicitly with new or implicitly with a variable declaration). Where does this space come from? This section discusses memory management techniques for the general problem of handling space requests of variable size.

The basic model for memory management is that we have a (large) block of contiguous memory locations, which we will call the memory pool . Periodically, memory requests are issued for some amount of space in the pool. A memory manager has the job of finding a contiguous block of locations of at least the requested size from somewhere within the memory pool. Honoring such a request is called memory allocation . The memory manager will typically return some piece of information that the requestor can hold on to so that later it can recover the data that were just stored by the memory manager. This piece of information is called a handle . At some point, space that has been requested might no longer be needed, and this space can be returned to the memory manager so that it can be reused. This is called a memory deallocation . We can define an ADT for a simple memory manager for storing variable length arrays of integers as follows.

The user of the MemManager ADT provides a pointer (in parameter info ) to space that holds some message to be stored or retrieved. This is similar to the C++ basic file read/write methods. The fundamental idea is that the client gives messages to the memory manager for safe keeping. The memory manager returns a receipt for the message in the form of a MemHandle object. The client holds the MemHandle until it wishes to get the message back.

Method insert lets the client tell the memory manager the length and contents of the message to be stored. This ADT assumes that the memory manager will remember the length of the message associated with a given handle, thus method getRecord does not include a length parameter but instead returns the message actually stored. Method release allows the client to tell the memory manager to release the space that stores a given message.

When all inserts and releases follow a simple pattern, such as last requested, first released (stack order), or first requested, first released (queue order), memory management is fairly easy. We are concerned here with the general case where blocks of any size might be requested and released in any order. This is known as dynamic memory allocation . One example of dynamic memory allocation is managing free store for a compiler’s runtime environment, such as the system-level new and delete operations in C++. Another example is managing main memory in a multitasking operating system. Here, a program might require a certain amount of space, and the memory manager must keep track of which programs are using which parts of the main memory. Yet another example is the file manager for a disk drive. When a disk file is created, expanded, or deleted, the file manager must allocate or deallocate disk space.

A block of memory or disk space managed in this way is sometimes referred to as a heap . The term “heap” is being used here in a different way than the heap data structure typically used to implement a priority queue. Here “heap” refers to the memory controlled by a dynamic memory management scheme.

In the rest of this chapter, we first study techniques for dynamic memory management. We then tackle the issue of what to do when no single block of memory in the memory pool is large enough to honor a given request.

Contact Us | | Privacy | | License    «   10. 10. Hashing Chapter Summary Exercises   ::   Contents   ::   11. 2. Dynamic Storage Allocation   »

Contact Us | | Report a bug

  • Rest.li Architecture
  • Rest.li Server
  • Rest.li Client Framework
  • Unstructured Data
  • Asynchronous in Rest.li
  • Data Schemas
  • PDSC Syntax
  • Migrating from PDSC to PDL
  • Avro Translation
  • Java Binding
  • Schema Annotation Processor
  • Modeling Resources
  • Snapshots and Resource Compatibility Checking
  • How Data is Serialized for Transport

How Data is Represented in Memory

  • Main concepts
  • Projections
  • Attachment Streaming
  • Mutli-language
  • Compatibility Matrix
  • Scala Integration
  • Writing Unit Tests
  • Send Request Query in the Body
  • Use Projections
  • Avro Conversion
  • Compression
  • Migrate to Rest.li 2.x
  • Test Suite - Add New Language
  • Configure Service Errors in Java
  • Configure Max Batch Size in Java
  • Rest.li - FAQ
  • Test Suite - Troubleshooting

The Data Layer

The data schema layer, the data template layer.

There are three architectural layers that define how data is stored in-memory and provide the API’s used to access this data.

  • The first layer is the Data layer. This is the storage layer and is totally generic, for example, not schema aware.
  • The second layer is the Data Schema layer. This layer provides the in-memory representation of the data schema.
  • The third layer is the Data Template layer. This layer provides Java type-safe access to data stored by the Data layer.

At the conceptual level, the Data layer provides generic in-memory representations of JSON objects and arrays. A DataMap and a DataList provide the in-memory representation of a JSON object and a JSON array respectively. These DataMaps and DataLists are the primary in-memory data structures that store and manage data belonging to instances of complex schema types. This layer allows data to be serialized and de-serialized into in-memory representations without requiring the schema to be known. In fact, the Data layer is not aware of schemas and do not require a schema to access the underlying data.

The main motivations behind the Data layer are:

  • To allow generic access to the underlying data for building generic assembly and query engines. These engines need a generic data representation to data access. Furthermore, they may need to construct new instances from dynamically executed expressions, such as joins and projections. The schema of these instances depend on the expression executed, and could not be known in advance.
  • To facilitate schema evolution. The Data layer enables “use what you know and pass on what you don’t”. It allows new fields to be added and passed through intermediate nodes in the service graph without requiring these nodes to also have their schemas updated to include these new fields.
  • To permit some Java Virtual Machine service calls to be optimized by avoiding serialization.

Constraints

The Data layer implements the following constraints:

  • It permits only allowed types to be stored as values.
  • All non-container values (not DataMap and not DataList ) are immutable.
  • Null is not a value. The Data.NULL constant is used to represent null deserialized from or to be serialized to JSON. Avoiding null Java values reduces complexity by reducing the number of states a field may have. Without null values, a field can have two states, “absent” or “has valid value”. If null values are permitted, a field can have three states, “absent”, “has null value”, and “has valid value”.
  • The object graph is always acyclic. The object graph is the graph of objects connected by DataMaps and DataLists.
  • The key type for a DataMap is always java.lang.String .

Additional Features

The Data layer provides the following additional features (above and beyond what the Java library provides.)

  • A DataMap and DataList may be made read-only. Once it is read-only, mutations will no longer be allowed and will throw java.lang.UnsupportedOperationException . There is no way to revert a read-only instance to read-write.
  • Access instrumentation. See com.linkedin.data.Instrumentable for details.
  • Implements deep copy that should return a object graph that is isomorphic with the source, i.e. the copy will retain the directed acyclic graph structure of the source.

Allowed Value Types

  • java.lang.Integer
  • java.lang.Long
  • java.lang.Float
  • java.lang.Double
  • java.lang.Boolean
  • java.lang.String
  • com.linkedin.data.ByteString
  • com.linkedin.data.DataMap
  • com.linkedin.data.DataList

Note Enum types are not allowed because enum types are not generic and portable. Enum values are stored as a string.

DataComplex

Both DataMap and DataList implement the com.linkedin.data.DataComplex interface. This interface declares the methods that supports the additional features common to a DataMap and a DataList . These methods are:

Note: Details on CowCommon , CowMap , and CowList have been omitted or covered under DataComplex . Cow provides copy-on-write functionality. The semantics of CowMap and CowList is similar to HashMap and ArrayList .

The com.linkedin.data.DataMap class has the following characteristics:

  • DataMap implements java.util.Map<String, Object> .
  • Its entrySet() , keySet() , and values() methods return unmodifiable set and collection views.
  • Its clone() and copy() methods returns a DataMap .

The com.linkedin.data.DataList class has the following characteristics.

  • DataList implements java.util.List<Object> .
  • Its clone() and copy() method return a DataList .

The Data Schema layer provides the in-memory representation of the data schema. The Data Schema Layer provides the following main features:

  • Parse a JSON encoded schema into in-memory representation using classes in this layer
  • Validate a Data object against a schema

Their common base class for Data Schema classes is com.linkedin.data.schema.DataSchema . It defines the following methods:

The following table shows the mapping of schema types to Data Schema classes.

Data to Schema Validation

The ValidateDataAgainstSchema class provides methods for validating Data layer instances with a Data Schema. The ValidationOption class is used to specify how validation should be performed and how to fix-up the input Data layer objects to conform to the schema. There are two independently configuration options:

  • RequiredMode option indicates how required fields should be handled during validation.
  • CoercionMode option indicates how to coerce Data layer objects to the Java type corresponding to their schema type.

Example Usage:

RequiredMode

The available RequiredModes are:

  • IGNORE Required fields may be absent. Do not indicate a validation error if a required field is absent.
  • MUST_BE_PRESENT If a required field is absent, then validation fails. Validation will fail even if the required field has been declared with a default value.
  • CAN_BE_ABSENT_IF_HAS_DEFAULT If a required field is absent and the field has not been declared with a default value, then validation fails. Validation will not attempt to modify the field to provide it with the default value.
  • FIXUP_ABSENT_WITH_DEFAULT If a required field is absent and it cannot be fixed-up with a default value, then validation fails. This mode will attempt to modify an absent field to provide it with the field’s default value. If the field does not have a default value, validation fails. If the field has a default value, validation will attempt to set the field’s value to the default value. This attempt may fail if fixup is not enabled or the DataMap containing the field cannot be modified because it is read-only. The provided default value will be read-only.

CoercionMode

Since JSON does not have or encode enough information on the actual types of primitives, and schema types like bytes and fixed are not represented by native types in JSON, the initial de-serialized in-memory representation of instances of these types may not be the actual type specified in the schema. For example, when de-serializing the number 52, it will be de-serialized into an Integer even though the schema type may be a Long . This is because a schema is not required to serialize or de-serialize.

When the data is accessed via schema aware language binding like the Java binding, the conversion/coercion can occur at the language binding layer. In cases when the language binding is not used, it may be desirable to fix-up a Data layer object by coercing it the Java type corresponding to the object’s schema. For example, the appropriate Java type the above example would be a Long . Another fix-up would be to fixup Avro-specified string encoding of binary data (bytes or fixed) into a ByteString . In another case, it may be desirable to coerce the string representation of a value to the Java type corresponding to the object’s schema. For example, coerce “65” to 65, the integer, if the schema type is “int”.

Whether an how coercion is performed is specified by CoercionMode . The available CoercionModes are:

  • OFF No coercion is performed.
  • NORMAL Numeric types are coerced to the schema’s corresponding Java numeric type. Avro-encoded binary strings are coerced to ByteString if the schema type is bytes or fixed.
  • STRING_TO_PRIMITIVE Includes all the coercions performed by NORMAL . In addition, also coerces string representations of numbers to the schema’s corresponding numeric type, and string representation of booleans (“true” or “false” case-insenstive) to Boolean .

NORMAL Coercion Mode

The following table provides additional details on the NORMAL validation and coercion mode.

(1) Even though Number type is allowed and used for fixing up to the desired type, the Data layer only allows Integer , Long , Float , and Double values to be held in a DataMap or DataList . (2) No fix-up is performed. (3) the String must be a valid encoding of binary data as specified by the Avro specification for encoding bytes into a JSON string.

STRING_TO_PRIMITIVE Coercion Mode

This mode includes allowed input types and associated validation and coercion’s of NORMAL . In addition, it allows the following additional input types and performs the following coercions on these additional allowed input types.

ValidationResult

The result of validation is returned through an instance of the ValidationResult class. This class has the following methods:

Note: Schema validation and coercion are currently explicit operations. They are not implicitly performed when data are de-serialized as part of remote invocations.

The Data Template layer provides Java type-safe access to the underlying data stored in the Data layer. It has explicit knowledge of the schema of the data stored. The code generator generates classes for complex schema types that derive from base classes in this layer. The common base of these generated is com.linkedin.data.DataTemplate . Typically, a DataTemplate instance is an overlay or wrapper for a DataMap or DataList instance. It allows type-safe access to the underlying data in the DataMap or DataList . (The exception is the FixedTemplate which is a subclass of DataTemplate for fixed schema types.)

The Data Template layer provides the following abstract base classes that are used to construct Java bindings for different complex schema types.

The unwrapped schema types are:

The wrapped schema types are types whose Java type-safe bindings are not the same as their data type in the Data layer. These types require a DataTemplate wrapper to provide type-safe access to the underlying data managed by the Data layer. The wrapped types are:

  • record and error

Enum is an unwrapped type even though its Java type-safe binding is not the same as its storage type in the Data layer. This is because enum conversions are done through coercing to and from java.lang.String  s implemented by the Data Template layer. This is similar to coercing between different numeric types also implemented by the Data Template layer.

The following table shows the relationships among types defined in the data schema, types stored and managed by the Data layer, and the types of the Java binding in the Data Template layer.

(1) When a JSON object is deserialized, the actual schema type is not known. Typically, the smallest sized type that can represent the deserialized value will be used to store the value in-memory. (2) Depending on the method, un-boxed types will be preferred to boxed types if applicable and the input or output arguments can never be null. (3) When a JSON object is deserialized, the actual schema type is not known for bytes and fixed. Values of bytes and fixed types are stored as strings as serialized representation is a string. However, ByteString is an equally valid Java type for these schema types.

abstract data type

storage structure

file structure

Posted under Data Types and Abstraction Data Structures and Algorithms

Engage with the Community - Add Your Comment

Confused About the Answer? Ask for Details Here.

Know the Explanation? Add it Here.

Q. Representation of data structure in memory is known as:

Similar questions, discover related mcqs.

Q. An ADT is defined to be a mathematical model of a user-defined type along with the collection of all ____________ operations on that model.

View solution

Q. Theoretical computer science refers to the collection of such topics that focus on the__________, as well as mathematical aspects of computing.

Q. Data type is the classification of pieces of information in a____________.

Q. A dynamic data structure is one in which the memory for elements is allocated dynamically at runtime. Is this statement True or False?

Q. Maintaining an efficient communication between programmers is job done by

Q. A structure for a design solution is described by

Q. Generalization of important design concepts for a recurring problem is done through a

Q. To deal with relationship between a collection of actions and a hierarchy of object types, approaches are of

Q. Techniques that are used to combine various software components are referred to as

Q. Identical nodes of a PR quadtree can be implemented by flyweight for

Q. Implementation of identical nodes of a PR quadtree can be done through pattern

Q. For implementation of PR quadtree data structure, design pattern to use is

Suggested Topics

Are you eager to expand your knowledge beyond Data Structures and Algorithms? We've curated a selection of related categories that you might find intriguing.

Click on the categories below to discover a wealth of MCQs and enrich your understanding of Computer Science. Happy exploring!

the representation of data structure in memory is known as

R Programming

Unleash the power of statistical computing with our R Programming MCQs. Topics cover...

the representation of data structure in memory is known as

Venture into server-side scripting with our PHP MCQs. Cover everything from syntax...

the representation of data structure in memory is known as

Get a firm grasp on database systems with our DBMS MCQs. Explore relational...

the representation of data structure in memory is known as

Master the suite of productivity tools with our MS Office MCQs. From Word and Excel...

the representation of data structure in memory is known as

Polish your web design skills with our CSS MCQs. Learn about selectors, properties,...

the representation of data structure in memory is known as

Web Technologies

Master the building blocks of the web with our Web Technologies MCQs. From HTML and...

the representation of data structure in memory is known as

Learn the leading NoSQL database with our MongoDB MCQs. Understand everything from...

the representation of data structure in memory is known as

Embedded Systems

Dive into the world of specialized computing systems with our Embedded Systems MCQs....

the representation of data structure in memory is known as

Cyber Security

Understand the fundamentals of safeguarding digital assets with our Cyber Security...

Northwestern Scholars Logo

  • Help & FAQ

The Representation of Knowledge in Memory

Research output : Chapter in Book/Report/Conference proceeding › Chapter

While originating from the senses, knowledge is not a blind record of sensory inputs. Normal people are not tape recorders, or video recorders; rather, they seem to process and reprocess information, imposing on it and producing from it knowledge which has structure. Schemata are data structures for representing the generic concepts stored in memory. They exist for generalized concepts underlying objects, situations, events, sequences of events, actions, and sequences of actions. Just as certain characteristics of the actors are specified by the play-write, so too a schema contains, as part of its specification, information about the types of objects that may be bound to the various variables of the schema. In much the same way as the entries for lexical items in a dictionary consist of other lexical items, so the structure of a schema is given in terms of relationships among other schemata.

ASJC Scopus subject areas

  • General Psychology
  • General Social Sciences

Other files and links

  • Link to publication in Scopus
  • Link to the citations in Scopus

Fingerprint

  • Event Mathematics 100%
  • Lexical Item Mathematics 100%
  • information INIS 100%
  • Bounds Mathematics 50%
  • Variables Mathematics 50%
  • Characteristics Mathematics 50%
  • Data Structure Mathematics 50%
  • Dictionary Computer Science 50%

T1 - The Representation of Knowledge in Memory

AU - Rumelhart, David E.

AU - Ortony, Andrew

N1 - Publisher Copyright: © 1977 by Lawrence Erlbaum Associates, Inc. All rights reserved.

PY - 2017/1/1

Y1 - 2017/1/1

N2 - While originating from the senses, knowledge is not a blind record of sensory inputs. Normal people are not tape recorders, or video recorders; rather, they seem to process and reprocess information, imposing on it and producing from it knowledge which has structure. Schemata are data structures for representing the generic concepts stored in memory. They exist for generalized concepts underlying objects, situations, events, sequences of events, actions, and sequences of actions. Just as certain characteristics of the actors are specified by the play-write, so too a schema contains, as part of its specification, information about the types of objects that may be bound to the various variables of the schema. In much the same way as the entries for lexical items in a dictionary consist of other lexical items, so the structure of a schema is given in terms of relationships among other schemata.

AB - While originating from the senses, knowledge is not a blind record of sensory inputs. Normal people are not tape recorders, or video recorders; rather, they seem to process and reprocess information, imposing on it and producing from it knowledge which has structure. Schemata are data structures for representing the generic concepts stored in memory. They exist for generalized concepts underlying objects, situations, events, sequences of events, actions, and sequences of actions. Just as certain characteristics of the actors are specified by the play-write, so too a schema contains, as part of its specification, information about the types of objects that may be bound to the various variables of the schema. In much the same way as the entries for lexical items in a dictionary consist of other lexical items, so the structure of a schema is given in terms of relationships among other schemata.

UR - http://www.scopus.com/inward/record.url?scp=85130917804&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85130917804&partnerID=8YFLogxK

M3 - Chapter

AN - SCOPUS:85130917804

SN - 9781138280410

BT - Schooling and the Acquisition of Knowledge

PB - Taylor and Francis

  • Data Structures
  • Linked List
  • Binary Tree
  • Binary Search Tree
  • Segment Tree
  • Disjoint Set Union
  • Fenwick Tree
  • Red-Black Tree
  • Advanced Data Structures

Array Representation in Data Structures

  • Data Structures | Array | Question 2
  • Data Structures | Array | Question 1
  • Why Array Data Structures is needed?
  • Data Structures | Misc | Question 7
  • Data Structures | Misc | Question 1
  • Data Structures | Misc | Question 4
  • Data Structures | Misc | Question 8
  • Array of Structures in C
  • Data Structures | Misc | Question 10
  • Array Data Structure
  • Data Structures | Linked List | Question 2
  • Introduction to Data Structures
  • Is array a Data Type or Data Structure?
  • Data Structures | Stack | Question 1
  • Data Structures | Stack | Question 2
  • Commonly Asked Data Structure Interview Questions
  • Data Structures in R Programming
  • Batch Script - Creating Structures in Arrays
  • Python Data Structures

Representation of Array

The representation of an array can be defined by its declaration. A declaration means allocating memory for an array of a given size.

Array

Arrays can be declared in various ways in different languages. For better illustration, below are some language-specific array declarations.

the representation of data structure in memory is known as

Array declaration

However, the above declaration is static or compile-time memory allocation, which means that the array element’s memory is allocated when a program is compiled.

Here only a fixed size (i,e. the size that is mentioned in square brackets [] ) of memory will be allocated for storage, but don’t you think it will not be the same situation as we know the size of the array every time, there might be a case where we don’t know the size of the array. If we declare a larger size and store a lesser number of elements will result in a waste of memory or either be a case where we declare a lesser size then we won’t get enough memory to store the rest of the elements. In such cases, static memory allocation is not preferred.

Please Login to comment...

Similar reads, improve your coding skills with practice.

 alt=

What kind of Experience do you want to share?

ScienceDaily

Dynamic DNA structures and the formation of memory

An international collaborative research team, including scientists from UQ's Queensland Brain Institute (QBI), has discovered a novel mechanism underlying memory involving rapid changes in a specific DNA structure.

The team found that G-quadraplex DNA (G4-DNA) accumulates in neurons and dynamically controls the activation and repression of genes underlying long-term memory formation.

In addition, using advanced CRISPR-based gene editing technology, the team revealed the causal mechanism underlying the regulation of G4-DNA in the brain, which involves site-directed deposition of the DNA helicase, DHX36.

The new study, published in the Journal of Neuroscience , provides the first evidence that G4-DNA is present in neurons and functionally involved in the expression of different memory states.

The study, led by Dr Paul Marshall at the Australian National University and QBI and a team of collaborators from Linköping University, Weizmann Institute of Science, and the University of California Irvine, highlights the role that dynamic DNA structures play in memory consolidation.

DNA flexibility

For decades, many scientists considered the topic of DNA to be solved. DNA is widely recognised as a right-handed double helix, with changes to this structure only occurring during DNA replication and transcription. This structure contains two strands of nucleic acid featuring four bases: adenine (A) and thymine (T), guanine (G) and cytosine (C), which pair together to form rungs of the DNA ladder.

We now know that this is not the complete story. QBI's Professor Tim Bredy explains that DNA can assume a variety of conformational states that are functionally important for cellular processes.

"DNA topology is much more dynamic than the static, right-hand double helix, as presumed by most researchers in the field," said Professor Bredy. "There are actually more than 20 different DNA structure states identified to date, each potentially serving a different role in the regulation of gene expression."

In the new study, the team has now shown that a significant proportion of these structures are causally involved in the regulation of activity-dependent gene expression and required the formation of memory.

Although epigenetic modifications have a well-established association with neuronal plasticity and memory, to date, little is known about how local changes in DNA structure affect gene expression.

G4-DNA accumulates in cells when guanines fold into a stable four-stranded DNA structure. While there is evidence for the role that this structure plays in regulating transcription, prior to this study, its involvement in experience-dependent gene expression had not been explored.

G4-DNA regulates memory

G4-DNA transiently accumulates in active neurons during learning. The formation of this quadraplex structure takes place over milliseconds or minutes, at the same rate of neuronal transcription in response to an experience.

The G4-DNA structure can therefore be involved in both the enhanced and impairment of transcription in active neurons, based on their activity, to enable different memory states.

This mechanism highlights how DNA dynamically responds to experience and suggests that it has the capacity to store information not just in its code or epigenetically, but structurally too.

Extinguishing fear memories

The extinction of conditioned fear is a behavioural adaptation that is critical for survival. Fear extinction relies on forming new long-term memories that share environmental elements with the original experience, which compete with and override the fear-related memory.

The formation of long-lasting extinction memories depends on coordinated changes in gene expression.

Professor Bredy said it is now evident that activity-induced gene expression underlying extinction is a tightly coordinated process.

"This process is dependent on temporal interactions between the transcriptional machinery and a variety of DNA structures, including G4-DNA, rather than being determined solely by DNA sequence or DNA modification as so often has been presumed.

"This discovery extends our understanding of how DNA functions as a highly dynamic transcriptional control device in learning and memory."


Story Source:

Materials provided by the University of Queensland.

Journal Reference:

  • Paul R. Marshall, Joshua Davies, Qiongyi Zhao, Wei-Siang Liau, Yujin Lee, Dean Basic, Ambika Periyakaruppiah, Esmi L. Zajaczkowski, Laura J. Leighton, Sachithrani U. Madugalle, Mason Musgrove, Marcin Kielar, Arie Maeve Brueckner, Hao Gong, Haobin Ren, Alexander Walsh, Lech Kaczmarczyk, Walker S. Jackson, Alon Chen, Robert C. Spitale, Timothy W. Bredy. DNA G-Quadruplex Is a Transcriptional Control Device That Regulates Memory. The Journal of Neuroscience, 2024; 44(15): e0093232024. DOI: 10.1523/JNEUROSCI.0093-23.2024


