
Thread Atomic Operations in Python

March 13, 2022 by Jason Brownlee in Python Threading

Last Updated on September 12, 2022

Operations like assignment and adding values to a list or a dict in Python are atomic.

In this tutorial you will discover thread atomic operations in Python.

Let’s get started.


Atomic Operations

An atomic operation is a single code instruction, or a sequence of instructions, that is completed without interruption.

A program may be interrupted for one of many reasons. In concurrent programming, a program may be interrupted via a context switch.

You may recall that the operating system controls what threads execute and when. A context switch refers to the operating system pausing the execution of a thread and storing its state, while unpausing another thread and restoring its state.

A thread cannot be context switched in the middle of an atomic operation.

This means these operations are thread-safe as we can expect them to be completed once started.

Now that we know what an atomic operation is, let’s look at some examples in Python.


Atomic Operations in Python

A number of operations in Python are atomic.

Under the covers, the Python interpreter that runs your program executes Python bytecodes in a virtual machine, called the Python Virtual Machine (PVM). These are a lower-level set of instructions and provide the basis for both context switching between threads and atomic operations.

Specifically, a Python program will be context switched at the level of Python bytecodes. A Python program will also be atomic at the level of Python bytecodes.

In general, Python offers to switch among threads only between bytecode instructions — What kinds of global value mutation are thread-safe?, Library and Extension FAQ

Nevertheless, a number of standard Python operations are atomic at both the Python code and bytecode level. This means that the operations are thread-safe under the reference Python interpreter (CPython) at the time of writing.

The Python FAQ provides a useful list of these operations.

Let’s review some of these atomic operations in Python.

Atomic Assignment

Assigning a value to a variable is atomic.

Likewise, assigning a value to an object property is atomic.
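The original code listings were stripped from the page. A minimal sketch of both cases (names are illustrative, not from the original post):

```python
# Sketch: a plain assignment compiles to a single store bytecode,
# so another thread can never observe a half-completed assignment.

x = 33  # assigning a value to a variable is atomic


class Record:
    pass


r = Record()
r.value = 42  # assigning a value to an object property is atomic
```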

Atomic List Operations

Many operations on lists are atomic.

Adding a value to a list is atomic.

Adding one list to the end of another list is atomic.

Retrieving a value from a list by index is atomic.

Removing a value from a list is atomic.

Assigning a slice of the list is atomic.

Sorting a list is atomic.
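The listings for these operations were lost in extraction; here is one sketch covering all of them (the values are illustrative):

```python
# Each of these list operations is handled by a single call into the
# C implementation of list, so each completes without interruption
# under CPython.
mylist = [3, 1, 2]
mylist.append(4)        # adding a value to a list
mylist.extend([5, 6])   # adding one list to the end of another
value = mylist[0]       # retrieving a value by index
mylist.remove(1)        # removing a value
mylist[0:2] = [9, 9]    # assigning a slice of the list
mylist.sort()           # sorting the list in place
```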

Atomic Dict Operations

Some operations on a dictionary are atomic.

Assigning a value to a key on the dict is atomic.

Combining one dict into another is atomic.

Retrieving the keys from a dict is atomic.
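A sketch of the dict operations above (illustrative values; recall the caveat later in the article that this assumes keys with built-in `__hash__` and `__eq__`):

```python
# Each dict operation below is a single call into the C
# implementation of dict under CPython.
d = {'a': 1}
d['b'] = 2              # assigning a value to a key
d.update({'c': 3})      # combining one dict into another
keys = list(d.keys())   # retrieving the keys
```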

Now that we are familiar with atomic operations in Python, let’s look at some operations that are not atomic.


Non-Atomic Operations in Python

Most operations are not atomic in Python.

This means that these operations are not thread-safe.

In this section we will discuss a few non-atomic operations that when used in concurrent programs can lead to a concurrent failure condition or bug called a race condition.

Adding and Subtracting a Variable

Adding or subtracting a value from a variable, such as an integer variable, is not atomic.

The reason for this is that at least three operations are involved:

  • Read the value of the variable.
  • Calculate the new value for the variable.
  • Assign the calculated value to the variable.
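This read-modify-write structure is visible in the bytecode. A quick check with the standard dis module (a sketch, not from the original post):

```python
import dis

# counter += 1 compiles to separate load, add, and store
# instructions; a context switch can occur between any of them.
dis.dis(compile("counter += 1", "<demo>", "exec"))
```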

Access and Assign

Combining the access and assignment of a value in a list or dict is not atomic.
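For example, a dict increment expands into three distinct steps (a sketch with illustrative names):

```python
counts = {'a': 0}

# counts['a'] += 1 is really three separate operations, and a thread
# can be context switched between any two of them:
value = counts['a']    # 1. access the current value
value = value + 1      # 2. calculate the new value
counts['a'] = value    # 3. assign the result back

# If two threads interleave these steps, one increment can be lost.
```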

Now that we know some examples of operations that are not atomic, let’s consider some recommendations regarding atomic operations.


Recommendations Regarding Atomic Operations

Although some operations are atomic in Python, we should never rely on an operation being atomic.

There are a number of good reasons for this, such as:

  • The operations may not be atomic if the code is executed by a different Python interpreter.
  • The behavior of the reference Python interpreter may change in the future.
  • Other programmers who have to read your code may not be as intimately familiar with Python atomic operations.
  • You may introduce more complex race conditions, e.g. operations are atomic but multiple such operations in a batch are not protected.

As such, you should not rely on the built-in atomic operations listed above. In most cases, you should act as if they were not available.

A similar stance is recommended in the Google Python style guide.

Do not rely on the atomicity of built-in types. While Python’s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren’t atomic (e.g. if __hash__ or __eq__ are implemented as Python methods) and their atomicity should not be relied upon. Neither should you rely on atomic variable assignment (since this in turn depends on dictionaries). Use the Queue module’s Queue data type as the preferred way to communicate data between threads. Otherwise, use the threading module and its locking primitives. Prefer condition variables and threading.Condition instead of using lower-level locks. — Section 2.18 Threading, Google Python Style Guide

This is excellent advice.

So, if we should not rely on built-in atomic operations in Python, what should we do instead?

Next, let’s look at the alternative.


Make Operations Atomic

When atomic operations are required, use a lock.

In concurrent programming, it is common to have critical sections of code that may be executed by multiple threads simultaneously and that must be protected.

These sections can be protected using locks such as the mutual exclusion lock (mutex) provided in the threading.Lock class.

If you are new to the threading.Lock class, you can learn more here:

  • How to Use a Mutex Lock in Python

This class allows you to define arbitrary blocks of code, from a single line to an entire function, that can be treated as an atomic block.

For example, the context manager for the threading.Lock can be used:
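The original listing was stripped from the page; a minimal sketch of the idea (the variable names are illustrative):

```python
import threading

lock = threading.Lock()
counter = 0

def task():
    global counter
    # The context manager acquires the lock on entry and releases it
    # on exit, even if the block raises an exception. While one thread
    # holds the lock, no other thread can enter this block.
    with lock:
        counter += 1

threads = [threading.Thread(target=task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100
```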

Using the lock to protect a block of code does not prevent the thread from being context switched in the middle of an instruction or between instructions in the block.

Instead, it prevents other threads from executing the same block while a thread holds the lock.

The effect is a simulated atomic operation or sequence of instructions in your program that can be used to protect data, variables, and state shared between threads.

Further Reading

This section provides additional resources that you may find helpful.

Python Threading Books

  • Python Threading Jump-Start , Jason Brownlee ( my book! )
  • Threading API Interview Questions
  • Threading Module API Cheat Sheet

I also recommend specific chapters in the following books:

  • See: Chapter 12: Concurrency
  • See: Chapter 7: Concurrency and Parallelism
  • See: Chapter: 14: Threads and Processes
  • Python Threading: The Complete Guide
  • Python ThreadPoolExecutor: The Complete Guide
  • Python ThreadPool: The Complete Guide
  • threading - Thread-based parallelism
  • queue — A synchronized queue class
  • Thread (computing), Wikipedia.
  • Process (computing), Wikipedia.

You now know about atomic operations in Python.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.



Which Python Operations Are Atomic?

May 1st, 2016

A conversation with a coworker turned me on to the fact that a surprising range of operations in Python are atomic, even operations like dictionary and class member assignment.

This wasn’t something I would have anticipated, given the number of machine language instructions that must ultimately be performed to complete an operation like hash table insertion.

The Python FAQ provides explanation and a full list of atomic operations, but the short answer is:

  • The Python bytecode interpreter only switches between threads between bytecode instructions
  • The Global Interpreter Lock (GIL) only allows a single thread to execute at a time
  • Many operations translate to a single bytecode instruction

It’s easy to check whether an operation compiles to a single bytecode instruction with dis.
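For instance (a sketch, not from the original post), a dict store compiles down to a single STORE_SUBSCR instruction:

```python
import dis

# The hash-table insertion happens inside one STORE_SUBSCR
# instruction, which is why d[key] = value is atomic under
# CPython's GIL.
dis.dis(compile("d[key] = value", "<demo>", "exec"))
```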

So what are the caveats? Is it safe to rely on atomicity instead of using locks?

First, the linked FAQ above doesn’t make it clear to what degree this behavior is considered part of the Python spec as opposed to simply a consequence of CPython implementation details. It depends on the GIL, so it would likely be unsafe on GIL-less Pythons (IronPython, Jython, PyPy-TM). Would it be safe on non-CPython implementations with a GIL (PyPy)? I could certainly imagine possible optimizations that would invalidate the atomicity of these operations.

Second, even if not strictly necessary, locks provide clear thread-safety guarantees and also serve as useful documentation that the code is accessing shared memory. Without a lock, care must be taken since it could be easy to assume operations are atomic when they are not (postmortem example: Python’s swap is not atomic). A clear comment is probably also necessary to head off the “Wait, this might need a lock!” reaction from collaborators.

Third, because Python allows overriding of so many builtin methods, there are edge cases where these operations are no longer atomic. The Google Python style guide advises:

Do not rely on the atomicity of built-in types. While Python’s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren’t atomic (e.g. if __hash__ or __eq__ are implemented as Python methods) and their atomicity should not be relied upon. Neither should you rely on atomic variable assignment (since this in turn depends on dictionaries).

That pretty much settles it for the general case.

There may still be some cases where it would be necessary, such as when implementing new locking functionality or in cases where performance is critical. Relying on atomicity of operations effectively allows you to piggyback on the GIL for your locking, reducing the cost of additional locks. But if lock performance is so critical, it seems like it would be better to first profile hotspots and look for other speedups.

So does it make sense to rely on the atomicity of operations when accessing or modifying shared mutable state?

Short answer:

  1. You’d better have a good reason.
  2. You’d better do some thorough research.

Otherwise, you’re better off just using a lock.


PEP 583 – A Concurrency Memory Model for Python


This PEP describes how Python programs may behave in the presence of concurrent reads and writes to shared variables from multiple threads. We use a happens before relation to define when variable accesses are ordered or concurrent. Nearly all programs should simply use locks to guard their shared variables, and this PEP highlights some of the strange things that can happen when they don’t, but programmers often assume that it’s ok to do “simple” things without locking, and it’s somewhat unpythonic to let the language surprise them. Unfortunately, avoiding surprise often conflicts with making Python run quickly, so this PEP tries to find a good tradeoff between the two.

So far, we have 4 major Python implementations – CPython, Jython , IronPython , and PyPy – as well as lots of minor ones. Some of these already run on platforms that do aggressive optimizations. In general, these optimizations are invisible within a single thread of execution, but they can be visible to other threads executing concurrently. CPython currently uses a GIL to ensure that other threads see the results they expect, but this limits it to a single processor. Jython and IronPython run on Java’s or .NET’s threading system respectively, which allows them to take advantage of more cores but can also show surprising values to other threads.

So that threaded Python programs continue to be portable between implementations, implementers and library authors need to agree on some ground rules.

Two simple memory models

Before talking about the details of data races and the surprising behaviors they produce, I’ll present two simple memory models. The first is probably too strong for Python, and the second is probably too weak.

In a sequentially-consistent concurrent execution, actions appear to happen in a global total order with each read of a particular variable seeing the value written by the last write that affected that variable. The total order for actions must be consistent with the program order. A program has a data race on a given input when one of its sequentially consistent executions puts two conflicting actions next to each other.

This is the easiest memory model for humans to understand, although it doesn’t eliminate all confusion, since operations can be split in odd places.

Happens-before consistency

The program contains a collection of synchronization actions, which in Python currently include lock acquires and releases and thread starts and joins. Synchronization actions happen in a global total order that is consistent with the program order (they don’t have to happen in a total order, but it simplifies the description of the model). A lock release synchronizes with all later acquires of the same lock. Similarly, given t = threading.Thread(target=worker):

  • A call to t.start() synchronizes with the first statement in worker().
  • The return from worker() synchronizes with the return from t.join().
  • If the return from t.start() happens before (see below) a call to t.isAlive() that returns False, the return from worker() synchronizes with that call.

We call the source of the synchronizes-with edge a release operation on the relevant variable, and we call the target an acquire operation.

The happens before order is the transitive closure of the program order with the synchronizes-with edges. That is, action A happens before action B if:

  • A falls before B in the program order (which means they run in the same thread)
  • A synchronizes with B
  • You can get to B by following happens-before edges from A.

An execution of a program is happens-before consistent if each read R sees the value of a write W to the same variable such that:

  • R does not happen before W , and
  • There is no other write V that overwrote W before R got a chance to see it. (That is, it can’t be the case that W happens before V happens before R .)

You have a data race if two conflicting actions aren’t related by happens-before.

Let’s use the rules from the happens-before model to prove that the following program prints “[7]”:
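The program listing was lost in extraction; the following is a reconstruction consistent with the proof steps below (a busy-waiting queue guarded by a lock named cond, with worker1 producing [7] and worker2 printing it):

```python
import threading

class MyQueue:
    def __init__(self):
        self.l = []
        self.cond = threading.Lock()

    def put(self, value):
        self.cond.acquire()
        self.l.append(value)
        self.cond.release()

    def get(self):
        # Release and re-acquire the lock until l contains a value.
        while True:
            self.cond.acquire()
            if self.l:
                value = self.l.pop(0)
                self.cond.release()
                return value
            self.cond.release()

myqueue = MyQueue()

def worker1():
    x = [7]
    myqueue.put(x)

def worker2():
    y = myqueue.get()
    print(y)

thread1 = threading.Thread(target=worker1)
thread2 = threading.Thread(target=worker2)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
```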

  • Because myqueue is initialized in the main thread before thread1 or thread2 is started, that initialization happens before worker1 and worker2 begin running, so there’s no way for either to raise a NameError, and both myqueue.l and myqueue.cond are set to their final objects.
  • The initialization of x in worker1 happens before it calls myqueue.put() , which happens before it calls myqueue.l.append(x) , which happens before the call to myqueue.cond.release() , all because they run in the same thread.
  • In worker2 , myqueue.cond will be released and re-acquired until myqueue.l contains a value ( x ). The call to myqueue.cond.release() in worker1 happens before that last call to myqueue.cond.acquire() in worker2 .
  • That last call to myqueue.cond.acquire() happens before myqueue.get() reads myqueue.l , which happens before myqueue.get() returns, which happens before print y , again all because they run in the same thread.
  • Because happens-before is transitive, the list initially stored in x in thread1 is initialized before it is printed in thread2.

Usually, we wouldn’t need to look all the way into a thread-safe queue’s implementation in order to prove that uses were safe. Its interface would specify that puts happen before gets, and we’d reason directly from that.

Surprising behaviors with races

Lots of strange things can happen when code has data races. It’s easy to avoid all of these problems by just protecting shared variables with locks. This is not a complete list of race hazards; it’s just a collection that seem relevant to Python.

In all of these examples, variables starting with r are local variables, and other variables are shared between threads.

This example comes from the Java memory model :

Initially p is q and p.x == 0.

    Thread 1      Thread 2
    r1 = p        r6 = p
    r2 = r1.x     r6.x = 3
    r3 = q
    r4 = r3.x
    r5 = r1.x

This can produce r2 == r5 == 0 but r4 == 3, proving that p.x went from 0 to 3 and back to 0.

A good compiler would like to optimize out the redundant load of p.x in initializing r5 by just re-using the value already loaded into r2 . We get the strange result if thread 1 sees memory in this order:

    Evaluation    Computes   Why
    r1 = p
    r2 = r1.x     r2 == 0
    r3 = q        r3 is p
    p.x = 3                  Side-effect of thread 2
    r4 = r3.x     r4 == 3
    r5 = r2       r5 == 0    Optimized from r5 = r1.x because r2 == r1.x

From N2177: Sequential Consistency for Atomics , and also known as Independent Read of Independent Write (IRIW).

Initially, a == b == 0.

    Thread 1   Thread 2   Thread 3   Thread 4
    r1 = a     r3 = b     a = 1      b = 1
    r2 = b     r4 = a

We may get r1 == r3 == 1 and r2 == r4 == 0, proving both that a was written before b (thread 1’s data), and that b was written before a (thread 2’s data). See Special Relativity for a real-world example.

This can happen if thread 1 and thread 3 are running on processors that are close to each other, but far away from the processors that threads 2 and 4 are running on and the writes are not being transmitted all the way across the machine before becoming visible to nearby threads.

Neither acquire/release semantics nor explicit memory barriers can help with this. Making the orders consistent without locking requires detailed knowledge of the architecture’s memory model, but Java requires it for volatiles so we could use documentation aimed at its implementers.

From the POPL paper about the Java memory model [#JMM-popl].

Initially, x == y == 0.

    Thread 1       Thread 2
    r1 = x         r2 = y
    if r1 != 0:    if r2 != 0:
        y = 42         x = 42

Can r1 == r2 == 42?

In a sequentially-consistent execution, there’s no way to get an adjacent read and write to the same variable, so the program should be considered correctly synchronized (albeit fragile), and should only produce r1 == r2 == 0 . However, the following execution is happens-before consistent:

    Statement      Value   Thread
    r1 = x         42      1
    if r1 != 0:    true    1
    y = 42                 1
    r2 = y         42      2
    if r2 != 0:    true    2
    x = 42                 2

WTF, you are asking yourself. Because there were no inter-thread happens-before edges in the original program, the read of x in thread 1 can see any of the writes from thread 2, even if they only happened because the read saw them. There are data races in the happens-before model.

We don’t want to allow this, so the happens-before model isn’t enough for Python. One rule we could add to happens-before that would prevent this execution is:

If there are no data races in any sequentially-consistent execution of a program, the program should have sequentially consistent semantics.

Java gets this rule as a theorem, but Python may not want all of the machinery you need to prove it.

Also from the POPL paper about the Java memory model [#JMM-popl].

Initially, x == y == 0.

    Thread 1   Thread 2
    r1 = x     r2 = y
    y = r1     x = r2

Can x == y == 42?

In a sequentially consistent execution, no. In a happens-before consistent execution, yes: The read of x in thread 1 is allowed to see the value written in thread 2 because there are no happens-before relations between the threads. This could happen if the compiler or processor transforms the code into:

    Thread 1        Thread 2
    y = 42          r2 = y
    r1 = x          x = r2
    if r1 != 42:
        y = r1

It can produce a security hole if the speculated value is a secret object, or points to the memory that an object used to occupy. Java cares a lot about such security holes, but Python may not.

From several classic double-checked locking examples.

Initially, d == None.

    Thread 1               Thread 2
    while not d: pass      d = [3, 4]
    assert d[1] == 4

This could raise an IndexError, fail the assertion, or, without some care in the implementation, cause a crash or other undefined behavior.

Thread 2 may actually be implemented as:
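The listing was lost in extraction; a reconstruction of the expanded form, per the surrounding discussion, is:

```python
r1 = list()
r1.append(3)
r1.append(4)
d = r1  # publish only after the list is fully populated
```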

Because the assignment to d and the item assignments are independent, the compiler and processor may optimize that to:
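Again the listing was lost; a reconstruction of the reordered form is:

```python
r1 = list()
d = r1        # d is published while the list is still empty...
r1.append(3)  # ...so another thread may read d before these
r1.append(4)  # appends have happened
```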

Which is obviously incorrect and explains the IndexError. If we then look deeper into the implementation of r1.append(3) , we may find that it and d[1] cannot run concurrently without causing their own race conditions. In CPython (without the GIL), those race conditions would produce undefined behavior.

There’s also a subtle issue on the reading side that can cause the value of d[1] to be out of date. Somewhere in the implementation of list , it stores its contents as an array in memory. This array may happen to be in thread 1’s cache. If thread 1’s processor reloads d from main memory without reloading the memory that ought to contain the values 3 and 4, it could see stale values instead. As far as I know, this can only actually happen on Alphas and maybe Itaniums, and we probably have to prevent it anyway to avoid crashes.

From several more double-checked locking examples.

Initially, d == dict() and initialized == False.

    Thread 1                       Thread 2
    while not initialized: pass    d['a'] = 3
    r1 = d['a']                    initialized = True
    r2 = r1 == 3
    assert r2

This could raise a KeyError, fail the assertion, or, without some care in the implementation, cause a crash or other undefined behavior.

Because d and initialized are independent (except in the programmer’s mind), the compiler and processor can rearrange these almost arbitrarily, except that thread 1’s assertion has to stay after the loop.

This is a problem with Java final variables and the proposed data-dependency ordering in C++0x.

First execute:

    g = []
    def Init():
        g.extend([1, 2, 3])
        return [1, 2, 3]
    h = None

Then in two threads:

    Thread 1              Thread 2
    while not h: pass     r1 = Init()
    assert h == [1,2,3]   freeze(r1)
    assert h == g         h = r1

If h has semantics similar to a Java final variable (except for being write-once), then even though the first assertion is guaranteed to succeed, the second could fail.

Data-dependent guarantees like those final provides only work if the access is through the final variable. It’s not even safe to access the same object through a different route. Unfortunately, because of how processors work, final’s guarantees are only cheap when they’re weak.

The rules for Python

The first rule is that Python interpreters can’t crash due to race conditions in user code. For CPython, this means that race conditions can’t make it down into C. For Jython, it means that NullPointerExceptions can’t escape the interpreter.

Presumably we also want a model at least as strong as happens-before consistency because it lets us write a simple description of how concurrent queues and thread launching and joining work.

Other rules are more debatable, so I’ll present each one with pros and cons.

We’d like programmers to be able to reason about their programs as if they were sequentially consistent. Since it’s hard to tell whether you’ve written a happens-before race, we only want to require programmers to prevent sequential races. The Java model does this through a complicated definition of causality, but if we don’t want to include that, we can just assert this property directly.

If the program produces a self-justifying value, it could expose access to an object that the user would rather the program not see. Again, Java’s model handles this with the causality definition. We might be able to prevent these security problems by banning speculative writes to shared variables, but I don’t have a proof of that, and Python may not need those security guarantees anyway.

The .NET [#CLR-msdn] and x86 [#x86-model] memory models are based on defining which reorderings compilers may allow. I think that it’s easier to program to a happens-before model than to reason about all of the possible reorderings of a program, and it’s easier to insert enough happens-before edges to make a program correct, than to insert enough memory fences to do the same thing. So, although we could layer some reordering restrictions on top of the happens-before base, I don’t think Python’s memory model should be entirely reordering restrictions.

Assignments of primitive types are already atomic. If you assign 3<<72 + 5 to a variable, no thread can see only part of the value. Jeremy Manson suggested that we extend this to all objects. This allows compilers to reorder operations to optimize them, without allowing some of the more confusing uninitialized values . The basic idea here is that when you assign a shared variable, readers can’t see any changes made to the new value before the assignment, or to the old value after the assignment. So, if we have a program like:

Initially, (d.a, d.b) == (1, 2), and (e.c, e.d) == (3, 4). We also have class Obj(object): pass.

    Thread 1     Thread 2
    r1 = Obj()   r3 = d
    r1.a = 3     r4, r5 = r3.a, r3.b
    r1.b = 4     r6 = e
    d = r1       r7, r8 = r6.c, r6.d
    r2 = Obj()
    r2.c = 6
    r2.d = 7
    e = r2

(r4, r5) can be (1, 2) or (3, 4) but nothing else, and (r7, r8) can be (3, 4) or (6, 7) but nothing else. Unlike if writes were releases and reads were acquires, it’s legal for thread 2 to see (e.c, e.d) == (6, 7) and (d.a, d.b) == (1, 2) (out of order).

This allows the compiler a lot of flexibility to optimize without allowing users to see some strange values. However, because it relies on data dependencies, it introduces some surprises of its own. For example, the compiler could freely optimize the above example to:

    Thread 1     Thread 2
    r1 = Obj()   r3 = d
    r2 = Obj()   r6 = e
    r1.a = 3     r4, r7 = r3.a, r6.c
    r2.c = 6     r5, r8 = r3.b, r6.d
    r2.d = 7
    e = r2
    r1.b = 4
    d = r1

As long as it didn’t let the initialization of e move above any of the initializations of members of r2 , and similarly for d and r1 .

This also helps to ground happens-before consistency. To see the problem, imagine that the user unsafely publishes a reference to an object as soon as she gets it. The model needs to constrain what values can be read through that reference. Java says that every field is initialized to 0 before anyone sees the object for the first time, but Python would have trouble defining “every field”. If instead we say that assignments to shared variables have to see a value at least as up to date as when the assignment happened, then we don’t run into any trouble with early publication.

Most other languages with any guarantees for unlocked variables distinguish between ordinary variables and volatile/atomic variables. They provide many more guarantees for the volatile ones. Python can’t easily do this because we don’t declare variables. This may or may not matter, since python locks aren’t significantly more expensive than ordinary python code. If we want to get those tiers back, we could:

  • Introduce a set of atomic types similar to Java’s [5] or C++’s [6]. Unfortunately, we couldn’t assign to them with =.
  • Without requiring variable declarations, we could also specify that all of the fields on a given object are atomic.
  • Extend the __slots__ mechanism [7] with a parallel __volatiles__ list, and maybe a __finals__ list.

We could just adopt sequential consistency for Python. This avoids all of the hazards mentioned above, but it prohibits lots of optimizations too. As far as I know, this is the current model of CPython, but if CPython learned to optimize out some variable reads, it would lose this property.

If we adopt this, Jython’s dict implementation may no longer be able to use ConcurrentHashMap because that only promises to create appropriate happens-before edges, not to be sequentially consistent (although maybe the fact that Java volatiles are totally ordered carries over). Both Jython and IronPython would probably need to use AtomicReferenceArray or the equivalent for any __slots__ arrays.

The x86 model is:

  • Loads are not reordered with other loads.
  • Stores are not reordered with other stores.
  • Stores are not reordered with older loads.
  • Loads may be reordered with older stores to different locations but not with older stores to the same location.
  • In a multiprocessor system, memory ordering obeys causality (memory ordering respects transitive visibility).
  • In a multiprocessor system, stores to the same location have a total order.
  • In a multiprocessor system, locked instructions have a total order.
  • Loads and stores are not reordered with locked instructions.

In acquire/release terminology, this appears to say that every store is a release and every load is an acquire. This is slightly weaker than sequential consistency, in that it allows inconsistent orderings , but it disallows zombie values and the compiler optimizations that produce them. We would probably want to weaken the model somehow to explicitly allow compilers to eliminate redundant variable reads. The x86 model may also be expensive to implement on other platforms, although because x86 is so common, that may not matter much.

We can adopt an initial memory model without totally restricting future implementations. If we start with a weak model and want to get stronger later, we would only have to change the implementations, not programs. Individual implementations could also guarantee a stronger memory model than the language demands, although that could hurt interoperability. On the other hand, if we start with a strong model and want to weaken it later, we can add a from __future__ import weak_memory statement to declare that some modules are safe.

Implementation Details

The required model is weaker than any particular implementation. This section tries to document the actual guarantees each implementation provides, and should be updated as the implementations change.

CPython

Uses the GIL to guarantee that other threads don't see funny reorderings, and does few enough optimizations that I believe it's actually sequentially consistent at the bytecode level. Threads can switch between any two bytecodes (instead of only between statements), so two threads that concurrently execute:
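The statement elided here is, per the PEP's surrounding text, a plain unlocked increment; a reconstruction:

```python
i = 0      # "with i initially 0", as the text says
i = i + 1  # load i, add 1, store i: separate bytecodes, so a thread
           # switch can land between the load and the store
```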

with i initially 0 could easily end up with i==1 instead of the expected i==2 . If they execute:
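Presumably the locked form (the lock name i_lock is an assumption; the PEP's snippet is elided from this copy):

```python
import threading

i = 0
i_lock = threading.Lock()

with i_lock:   # the whole read-modify-write is protected by the lock
    i = i + 1
```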

instead, CPython 2.6 will always give the right answer, but it’s easy to imagine another implementation in which this statement won’t be atomic.

PyPy

Also uses a GIL, but probably does enough optimization to violate sequential consistency. I know very little about this implementation.

Jython

Provides true concurrency under the Java memory model and stores all object fields (except for those in __slots__ ?) in a ConcurrentHashMap , which provides fairly strong ordering guarantees. Local variables in a function may have fewer guarantees, which would become visible if they were captured into a closure that was then passed to another thread.

IronPython

Provides true concurrency under the CLR memory model, which probably protects it from uninitialized values . IronPython uses a locked map to store object fields, providing at least as many guarantees as Jython.

Acknowledgements

Thanks to Jeremy Manson and Alex Martelli for detailed discussions on what this PEP should look like.

This document has been placed in the public domain.

Source: https://github.com/python/peps/blob/main/peps/pep-0583.rst

Last modified: 2023-09-09 17:39:29 GMT

Python – Is Python variable assignment atomic

python signals

Let's say I am using a signal handler for handling an interval timer.

Can I set SomeGlobalVariable without worrying that, in an unlikely scenario that whilst setting SomeGlobalVariable (i.e. the Python VM was executing bytecode to set the variable), that the assignment within the signal handler will break something? (i.e. meta-stable state)

Update : I am specifically interested in the case where a "compound assignment" is made outside of the handler.

(maybe I am thinking too "low level" and this is all taken care of in Python… coming from an Embedded Systems background, I have these sorts of impulses from time to time)

Best Answer

Simple assignment to simple variables is "atomic" AKA threadsafe (compound assignments such as += or assignments to items or attributes of objects need not be, but your example is a simple assignment to a simple, albeit global, variable, thus safe).

Related Solutions

Python – how to execute a program or call a system command.

Use the subprocess module in the standard library:
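A minimal sketch of the elided call (running a child Python process so the example is self-contained):

```python
import subprocess
import sys

# capture_output/text require Python 3.7+
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True, text=True,
)
print(result.returncode)      # 0
print(result.stdout.strip())  # hello
```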

The advantage of subprocess.run over os.system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc.).

Even the documentation for os.system recommends using subprocess instead:

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.

On Python 3.4 and earlier, use subprocess.call instead of .run :

Python – What are metaclasses in Python

Classes as objects.

Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.

In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:

But classes are more than that in Python. Classes are objects too.

Yes, objects.

As soon as you use the keyword class , Python executes it and creates an object . The instruction

creates in memory an object with the name ObjectCreator .

This object (the class) is itself capable of creating objects (the instances), and this is why it's a class .

But still, it's an object, and therefore:

  • you can assign it to a variable
  • you can copy it
  • you can add attributes to it
  • you can pass it as a function parameter

Creating classes dynamically

Since classes are objects, you can create them on the fly, like any object.

First, you can create a class in a function using class :

But it's not so dynamic, since you still have to write the whole class yourself.

Since classes are objects, they must be generated by something.

When you use the class keyword, Python creates this object automatically. But as with most things in Python, it gives you a way to do it manually.

Remember the function type ? The good old function that lets you know what type an object is:

Well, type has a completely different ability, it can also create classes on the fly. type can take the description of a class as parameters, and return a class.

(I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backward compatibility in Python)

type works this way:

  • name : name of the class
  • bases : tuple of the parent class (for inheritance, can be empty)
  • attrs : dictionary containing attributes names and values

can be created manually this way:
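The elided snippet presumably builds the class with type(name, bases, attrs), e.g.:

```python
# Equivalent to `class MyShinyClass: pass`, built manually:
MyShinyClass = type("MyShinyClass", (), {})

print(MyShinyClass)    # the class object
print(MyShinyClass())  # an instance of it
```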

You'll notice that we use MyShinyClass as the name of the class and as the variable to hold the class reference. They can be different, but there is no reason to complicate things.

type accepts a dictionary to define the attributes of the class. So:

Can be translated to:
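A sketch of the two elided snippets (the Foo/bar names follow the answer):

```python
# class Foo:
#     bar = True
# translates to:
Foo = type("Foo", (), {"bar": True})

print(Foo.bar)    # True
print(Foo().bar)  # True
```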

And used as a normal class:

And of course, you can inherit from it, so:

Eventually, you'll want to add methods to your class. Just define a function with the proper signature and assign it as an attribute.
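A sketch of inheritance plus an added method, assuming the Foo class from above and an echo_bar function (the answer's version prints the value; returning it keeps this testable):

```python
Foo = type("Foo", (), {"bar": True})

def echo_bar(self):
    return self.bar

# Inherit from Foo and attach the function as a method:
FooChild = type("FooChild", (Foo,), {"echo_bar": echo_bar})

print(FooChild.bar)           # True, inherited from Foo
print(FooChild().echo_bar())  # True
```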

And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.

You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.

This is what Python does when you use the keyword class , and it does so by using a metaclass.

What are metaclasses (finally)

Metaclasses are the 'stuff' that creates classes.

You define classes in order to create objects, right?

But we learned that Python classes are objects.

Well, metaclasses are what create these objects. They are the classes' classes, you can picture them this way:

You've seen that type lets you do something like this:

It's because the function type is in fact a metaclass. type is the metaclass Python uses to create all classes behind the scenes.

Now you wonder "why the heck is it written in lowercase, and not Type ?"

Well, I guess it's a matter of consistency with str , the class that creates strings objects, and int the class that creates integer objects. type is just the class that creates class objects.

You see that by checking the __class__ attribute.

Everything, and I mean everything, is an object in Python. That includes integers, strings, functions and classes. All of them are objects. And all of them have been created from a class:

Now, what is the __class__ of any __class__ ?
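A quick check of the chain the answer walks through:

```python
age = 35
name = "bob"

print(age.__class__)             # <class 'int'>
print(name.__class__)            # <class 'str'>
print(age.__class__.__class__)   # <class 'type'>
print(name.__class__.__class__)  # <class 'type'>
```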

So, a metaclass is just the stuff that creates class objects.

You can call it a 'class factory' if you wish.

type is the built-in metaclass Python uses, but of course, you can create your own metaclass.

The __metaclass__ attribute

In Python 2, you can add a __metaclass__ attribute when you write a class (see next section for the Python 3 syntax):

If you do so, Python will use the metaclass to create the class Foo .

Careful, it's tricky.

You write class Foo(object) first, but the class object Foo is not created in memory yet.

Python will look for __metaclass__ in the class definition. If it finds it, it will use it to create the object class Foo . If it doesn't, it will use type to create the class.

Read that several times.

When you do:

Python does the following:

Is there a __metaclass__ attribute in Foo ?

If yes, create in-memory a class object (I said a class object, stay with me here), with the name Foo by using what is in __metaclass__ .

If Python can't find __metaclass__ , it will look for a __metaclass__ at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes).

Then if it can't find any __metaclass__ at all, it will use the Bar 's (the first parent) own metaclass (which might be the default type ) to create the class object.

Be careful here that the __metaclass__ attribute will not be inherited, the metaclass of the parent ( Bar.__class__ ) will be. If Bar used a __metaclass__ attribute that created Bar with type() (and not type.__new__() ), the subclasses will not inherit that behavior.

Now the big question is, what can you put in __metaclass__ ?

The answer is something that can create a class.

And what can create a class? type , or anything that subclasses or uses it.

Metaclasses in Python 3

The syntax to set the metaclass has been changed in Python 3:
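The elided Python 3 snippet has this shape (MyMetaclass is a placeholder name):

```python
class MyMetaclass(type):  # a do-nothing metaclass, for illustration
    pass

class Foo(metaclass=MyMetaclass):  # keyword argument in the base-class list
    pass

print(type(Foo))  # Foo was created by MyMetaclass, not directly by type
```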

i.e. the __metaclass__ attribute is no longer used, in favor of a keyword argument in the list of base classes.

The behavior of metaclasses however stays largely the same .

One thing added to metaclasses in Python 3 is that you can also pass attributes as keyword-arguments into a metaclass, like so:

Read the section below for how Python handles this.

Custom metaclasses

The main purpose of a metaclass is to change the class automatically, when it's created.

You usually do this for APIs, where you want to create classes matching the current context.

Imagine a stupid example, where you decide that all classes in your module should have their attributes written in uppercase. There are several ways to do this, but one way is to set __metaclass__ at the module level.

This way, all classes of this module will be created using this metaclass, and we just have to tell the metaclass to turn all attributes to uppercase.

Luckily, __metaclass__ can actually be any callable, it doesn't need to be a formal class (I know, something with 'class' in its name doesn't need to be a class, go figure... but it's helpful).

So we will start with a simple example, by using a function.

Let's check:

Now, let's do exactly the same, but using a real class for a metaclass:

Let's rewrite the above, but with shorter and more realistic variable names now that we know what they mean:

You may have noticed the extra argument cls . There is nothing special about it: __new__ always receives the class it's defined in, as the first parameter. Just like you have self for ordinary methods which receive the instance as the first parameter, or the defining class for class methods.

But this is not proper OOP. We are calling type directly and we aren't overriding or calling the parent's __new__ . Let's do that instead:

We can make it even cleaner by using super , which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type):
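A reconstruction of the super()-based version the answer describes (the UpperAttrMetaclass name is the answer's; the exact body is a sketch, since the original snippet is elided from this copy):

```python
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, attrs):
        # Uppercase every non-dunder attribute name before the class is built.
        uppercase_attrs = {
            attr if attr.startswith("__") else attr.upper(): v
            for attr, v in attrs.items()
        }
        return super().__new__(cls, clsname, bases, uppercase_attrs)

class Foo(metaclass=UpperAttrMetaclass):
    bar = "bip"

print(hasattr(Foo, "bar"))  # False
print(hasattr(Foo, "BAR"))  # True
print(Foo.BAR)              # bip
```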

Oh, and in Python 3 if you do this call with keyword arguments, like this:

It translates to this in the metaclass to use it:

That's it. There is really nothing more about metaclasses.

The reason behind the complexity of the code using metaclasses is not because of metaclasses, it's because you usually use metaclasses to do twisted stuff relying on introspection, manipulating inheritance, vars such as __dict__ , etc.

Indeed, metaclasses are especially useful to do black magic, and therefore complicated stuff. But by themselves, they are simple:

  • intercept a class creation
  • modify the class
  • return the modified class

Why would you use metaclass classes instead of functions?

Since __metaclass__ can accept any callable, why would you use a class since it's obviously more complicated?

There are several reasons to do so:

  • The intention is clear. When you read UpperAttrMetaclass(type) , you know what's going to follow
  • You can use OOP. Metaclass can inherit from metaclass, override parent methods. Metaclasses can even use metaclasses.
  • Subclasses of a class will be instances of its metaclass if you specified a metaclass-class, but not with a metaclass-function.
  • You can structure your code better. You never use metaclasses for something as trivial as the above example. It's usually for something complicated. Having the ability to make several methods and group them in one class is very useful to make the code easier to read.
  • You can hook on __new__, __init__ and __call__, which allow you to do different things. Even though you can usually do it all in __new__, some people are just more comfortable using __init__.
  • These are called metaclasses, damn it! It must mean something!

Why would you use metaclasses?

Now the big question. Why would you use some obscure error-prone feature?

Well, usually you don't:

Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why).

Python Guru Tim Peters

The main use case for a metaclass is creating an API. A typical example of this is the Django ORM. It allows you to define something like this:

But if you do this:

It won't return an IntegerField object. It will return an int , and can even take it directly from the database.

This is possible because models.Model defines __metaclass__ and it uses some magic that will turn the Person you just defined with simple statements into a complex hook to a database field.

Django makes something complex look simple by exposing a simple API and using metaclasses, recreating code from this API to do the real job behind the scenes.

The last word

First, you know that classes are objects that can create instances.

Well, in fact, classes are themselves instances. Of metaclasses.

Everything is an object in Python, and they are all either instances of classes or instances of metaclasses.

Except for type .

type is actually its own metaclass. This is not something you could reproduce in pure Python, and is done by cheating a little bit at the implementation level.

Secondly, metaclasses are complicated. You may not want to use them for very simple class alterations. You can change classes by using two different techniques:

  • monkey patching
  • class decorators

99% of the time you need class alteration, you are better off using these.

But 98% of the time, you don't need class alteration at all.


CS105: Introduction to Python


Getting Started with Data

We have introduced several basic Python data structures: lists, strings, sets, tuples and dictionaries. Take some time to review, compare and contrast these constructs for handling various kinds of collections.

Built-in Atomic Data Types

 Table 1: Relational and Logical Operators


Is Python variable assignment atomic?


Compound assignment involves three steps: read-update-write. This is a race condition if another thread is run and writes a new value to the location after the read happens, but before the write. In this case a stale value is being updated and written back, which will clobber whatever new value was written by the other thread. In Python anything that involves the execution of a single byte code SHOULD be atomic, but compound assignment does not fit this criteria. Use a lock.

Google’s Style Guide advises against it

I’m not claiming that Google styleguides are the ultimate truth, but the rationale in the "Threading" section gives some insight (highlight is mine):

Do not rely on the atomicity of built-in types. While Python’s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren’t atomic (e.g. if __hash__ or __eq__ are implemented as Python methods) and their atomicity should not be relied upon. Neither should you rely on atomic variable assignment (since this in turn depends on dictionaries). Use the Queue module’s Queue data type as the preferred way to communicate data between threads. Otherwise, use the threading module and its locking primitives. Learn about the proper use of condition variables so you can use threading.Condition instead of using lower-level locks.

So my interpretation is that in Python everything is dict-like and when you do a = b in the backend somewhere globals['a'] = b is happening, which is bad since dicts are not necessarily thread safe.

For a single variable, Queue is not ideal however since we want it to hold just one element, and I could not find a perfect pre-existing container in the stdlib that automatically synchronizes a .set() method. So for now I’m doing just:
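The elided holder was presumably something along these lines (a sketch; the class and method names are assumptions):

```python
import threading

class AtomicValue:
    """Hold a single value; get() and set() are guarded by a lock."""

    def __init__(self, value=None):
        self._lock = threading.Lock()
        self._value = value

    def get(self):
        with self._lock:
            return self._value

    def set(self, value):
        with self._lock:
            self._value = value

v = AtomicValue(0)
v.set(42)
print(v.get())  # 42
```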

It is interesting that Martelli does not seem to mind that Google style guide recommendation 🙂 (he works at Google)

I wonder if the CPython GIL has implications to this question: What is the global interpreter lock (GIL) in CPython?

This thread also suggests that CPython dicts are thread-safe, citing the following glossary quote that explicitly mentions it: https://docs.python.org/3/glossary.html#term-global-interpreter-lock

This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access.

you can try dis to see the underlying bytecode.

produces the bytecode:

So the assignment is a single python bytecode (instruction 2), which is atomic in CPython since it executes one bytecode at a time.

whereas, adding one a += 1 :

+= corresponds to 4 instructions, which is not atomic.
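The elided dis listings can be regenerated; a sketch (exact opcodes and counts vary across CPython versions):

```python
import dis

def ops(src):
    """Opcode names for a module-level statement."""
    return [i.opname for i in dis.get_instructions(compile(src, "<demo>", "exec"))]

print(ops("a = b"))   # the write itself is a single STORE_NAME bytecode
print(ops("a += 1"))  # load, add, store: several bytecodes, hence not atomic
```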




Chemistry LibreTexts

2.3: Input, Variables and Arithmetic Operations

  • Page ID 366492

Learning Objectives

  • builtin functions (input(), print() and type() functions)
  • library function from package
  • variables (float, string and integer)
  • arithmetic operations on both numbers and strings (concatenation)
  • acquire data from API
  • 3 inputs are user entered values 
  • 1 input is called from a web API

Prior Knowledge

  • variable types
  • arithmetic operations

Further reading

Note: this activity requires you to use an IDE.  In this class we will be using the Thonny IDE, which is the default IDE on the Raspberry Pi, but any IDE will work.

Create a folder called chem4399 or chem5399 on your personal computer, and in this create a subfolder called "Python files".  In the Python files folder create an additional subfolder called "py03", and in here you will store the files associated with this activity. Remember, you overwrite a file if you change it and then run it on Thonny, so either comment out code you change or save as a new file before running.

Input and Print Functions

In lecture we discussed three "types" of functions:

  • "Builtin Functions" - those that come with the standard installation of python
  • "Library Functions" - those that can be added by installing a package
  • "User Defined Functions" - those that you can make

The input and print functions are of the first type.  All functions in python are followed by a parenthesis, which relates to the arguments they can operate on.  

input() 

In this activity we will use the input() function to input the name, atomic number and atomic mass of an element and assign each to a variable name.  We will then use the print() function to print to the shell the values we assign to each variable.  After this we will investigate the kind of data each variable represents.

  • Click File/New
  • Click Save, move to the folder py03, and name the file py03_activity1
  • Under the View menu make sure the shell and variables options are checked
  • Input the following code in the editor (so they are on line 1 & 2 of the editor)

In the above code you are using the assignment operator to assign the value you input to the variable "element_name". Run the above script by typing "F5" or hitting the green arrow on the Thonny menu.

Exercise \(\PageIndex{1}\)

(screenshot of the input prompt not reproduced)

It is a string because it is in quotation marks.

  • Highlight the text and copy the text  =input("input the name of an element: ") . 
  • In the next line type atomic_number and paste copied script,
  • In the next line type atomic_weight and paste the copied script
  • Now change the word "name" to "atomic number" and "atomic weight"

your code should look like this

Exercise \(\PageIndex{2}\)

What kind of data is the variables element_name, atomic_number and atomic_weight?

They are all strings: when you look at the variables window, each value is in quotation marks.

The input() function returns everything as a string, including numbers. These must be converted to either floating decimal or integer data types if you wish to perform arithmetic operations on them.

Python allows you to change the data type of a variable. In the next script we will assign the atomic number to an integer and the atomic weight to a floating decimal.

int() and float() functions 

  • Now convert the variables atomic_number and atomic_weight to integer and floating decimals using the int() and float() functions.

note, (figure \(\PageIndex{1}\)) shows how autocomplete can assist

(Figure \(\PageIndex{1}\): autocomplete screenshot not reproduced)

Your code should look like this

(screenshot of the completed code not reproduced)

You can use the int() and float() functions when you assign the variables. Here the input() function is nested inside of the int() or float() function.
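A sketch of the nesting (literal strings stand in for input() so the snippet runs on its own; the values are illustrative):

```python
# input() returns strings; int() / float() convert them for arithmetic.
atomic_number = int("79")        # e.g. gold
atomic_weight = float("196.97")

print(type(atomic_number))   # <class 'int'>
print(type(atomic_weight))   # <class 'float'>
```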

You can even input more than one variable of more than one type in a single line

It is best practice not to have more than 80 characters on a line, and so everything to the right can be put into parenthesis and wrapped around multiple lines of code

The print function (print()) is an output that prints the code to the shell. The print function can call variables and embed them into a string.

We can even put the output in a sentence

type() 

The type function will tell you the type of an object

Try the following in Thonny (Be sure you ran the code which inputs data for the variables)

Arithmetic Operations

The following symbols are used for arithmetic operations:  

Let's input two numbers and add them

So in order to do arithmetic operations we first need to convert the strings to either integer or floating decimal data types.

Floor division and the modulus often confuse students at first, so let's refresh your elementary-school long division: note that long division gives the quotient, and the modulus gives the remainder.

(long-division diagram not reproduced)

Note, this is commonly written as \(dividend \div divisor \; or\; \frac{dividend}{divisor}\)
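For example, dividing 17 by 5:

```python
print(17 / 5)    # true division   -> 3.4
print(17 // 5)   # floor division  -> 3, the quotient
print(17 % 5)    # modulus         -> 2, the remainder
```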

Concatenation

Merriam-Webster's dictionary defines concatenate to mean "linked together". In code box #10 you did not add your two numbers but you concatenated them. So 1+1=2 if they are integers, but 1+1=11 if they are strings. Concatenation can be very useful for obtaining information off the web.

In this example we are going to use concatenation with the PubChem PUG REST API (Power User Gateway Representational State Transfer Application Program Interface) to obtain the molar mass of a chemical. We are going to do this for the common name, but with more effort we could also do this for synonyms. An API is like a GUI (Graphical User Interface) in that the API allows interaction between two computers, while the GUI allows a person to interact with a computer.

When you navigate to a webpage, the web address in the hyperlink is a URL (Uniform Resource Locator), while you can also address items within a database by a URI (Uniform Resource Identifier). The W3C (World Wide Web Consortium) set forth the RDF (Resource Description Framework), which involves an RDF triple (subject, predicate, object) relationship, and we will take advantage of that to obtain the molar mass by concatenating a string into a URI. OK, it looks complicated, but it is simple. The subject is a molecule (which we will input and assign to a variable), the predicate is a property of the molecule (its molar mass, but we could seek other properties), and the object is the value of the molar mass for that molecule, which we will assign to a new variable and be able to use in a calculation.

First, place the following link into a new tab of your browser:

pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/aspirin/property/MolecularWeight/txt

This should return 180.16, which is the molar mass of aspirin as found in PubChem's compound record for aspirin .  If you change the word aspirin in the above link to another molecule, like caffeine, it will give you the molar mass of caffeine. Note, compound words like hydrochloric acid need a dash (hydrochloric-acid).

The above script gives you the uri for the value but you wish to assign it to a variable. For this we are going to need to install the request package from the PyPi package index. On the Thonny menu go to Tools/Manage Packages and type in requests and then Search on PyPi. Figure \(\PageIndex{4}\) shows it is already installed on this version of python, but it is probably not installed on yours.

(Figure \(\PageIndex{4}\): Thonny package-manager screenshot not reproduced)

The requests package will give us new functions and classes that we can use to assign the molar mass to a variable.
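Putting the pieces together, a sketch (the network call is commented out so the snippet runs offline; requests.get() returns a response whose .text attribute holds the value):

```python
compound = "caffeine"  # compound words need a dash, e.g. "hydrochloric-acid"

# Concatenate the URI: subject (the compound) and predicate (the property).
uri = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
       + compound + "/property/MolecularWeight/txt")
print(uri)

# import requests
# molar_mass = float(requests.get(uri).text)  # the object: needs network access
```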

Next week we will look into how to format the print statement, but for now we have shown that there is another way to input data into a program, and that is through a web API.  When we start streaming IOT data we will be interacting with other types of web APIs.

String multiplication

You can also multiply strings
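A quick sketch:

```python
print("ab" * 3)   # ababab
print("-" * 20)   # a handy divider line
```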

Assignment:

Write a Python program called Py_03_Molarity_Calculator, which tells you how many grams of a solute you need to make X ml of Y Molar solutions.  There are four inputs to this program as described in the following flow chart

You need to make extensive use of comments in your code and the first few lines need to identify who you are, which assignment this is, and what it does.

Note you can either use # to comment out a line, or three quotation marks to enclose a comment. In some IDEs the triple-quoted comment shows up in the help menu.

(flow-chart image not reproduced)
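The core calculation the flow chart describes can be sketched like this (the numbers and variable names are illustrative, not part of the assignment):

```python
# How many grams of solute for X mL of a Y molar solution?
volume_ml = 250.0     # X: desired volume in mL
molarity = 0.5        # Y: desired concentration in mol/L
molar_mass = 58.44    # g/mol, e.g. NaCl (could come from the PubChem API)

moles = molarity * volume_ml / 1000   # mol/L times L
grams = moles * molar_mass
print(grams)  # about 7.305 g
```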

NOTE: Undergrads may either input the molar mass as a typed in variable, or through the PubChem REST API. Grad students need to use the Pubchem REST API. In fact it is strongly encouraged that everyone use the API, as that gives you experience with installing a package in python. The following image is what you should see in the command line

(command-line screenshot not reproduced)

Upload a running version of the program to your Google Drive folder for grading.


Lock-Free Atomics in Python

doodspav/atomics


This library implements a wrapper around the lower level patomic C library (which is provided as part of this library through the build_patomic command in setup.py ).

It exposes hardware level lock-free (and address-free) atomic operations on a memory buffer, either internally allocated or externally provided, via a set of atomic classes.

The operations in these classes are both thread-safe and process-safe, meaning that they can be used on a shared memory buffer for interprocess communication (including with other languages such as C/C++).

Table of Contents

  • Multi-threading
  • Multi-processing
  • Construction
  • Special methods
  • Memory order
  • Future thoughts
  • Contributing

Linux/MacOS:

This library requires Python3.6+, and has a dependency on the cffi library. While the code here has no dependency on any implementation specific features, the cffi library functions used are likely to not work outside of CPython and PyPy.

Binaries are provided for the following platforms:

  • Windows [x86, amd64]
  • MacOSX [x86_64, universal2]
  • Linux [i686, x86_64, aarch64, ppc64le, s390x] [manylinux2014, musllinux_1_1]
  • Linux [i686, x86_64] [manylinux1]

If you are on one of these platforms and pip tries to build from source or fails to install, make sure that you have the latest version of pip installed. This can be done like so:
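For example, on Linux/macOS (on Windows, substitute py -m pip):

```shell
# upgrade pip so the prebuilt wheels listed above can be resolved
python3 -m pip install --upgrade pip
```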

If you need to build from source, check out the Building section as there are additional requirements for that.

The following example has a data race ( a is modified from multiple threads). The program is not correct, and a 's value will not equal total at the end.
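The original example did not survive extraction; here is a minimal sketch of such a data race using only the standard library (thread and iteration counts are illustrative):

```python
from threading import Thread

a = 0  # plain int, shared across threads with no synchronization

def add(n: int) -> None:
    global a
    for _ in range(n):
        a += 1  # read-modify-write: not atomic, so increments can be lost

per_thread = 100_000
threads = [Thread(target=add, args=(per_thread,)) for _ in range(4)]
total = per_thread * len(threads)
for t in threads:
    t.start()
for t in threads:
    t.join()
# lost updates mean a may end up below total; it can never exceed it
print(a, total)
```

Whether updates are actually lost on a given run depends on the interpreter and timing, which is exactly what makes data races so dangerous.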

This example implements the previous example but a is now an AtomicInt which can be safely modified from multiple threads (as opposed to int which can't). The program is correct, and a will equal total at the end.

This example is the counterpart to the above correct code, but using processes to demonstrate that atomic operations are also safe across processes. This program is also correct, and a will equal total at the end. It is also how one might communicate with processes written in other languages such as C/C++.

NOTE: Although shared_memory is showcased here, atomicview accepts any type that supports the buffer protocol as its buffer argument, so other sources of shared memory such as mmap could be used instead.

The following helper (abstract-ish base) types are available in atomics :

  • [ ANY , INTEGRAL , BYTES , INT , UINT ]

This library provides the following Atomic classes in atomics.base :

  • Atomic --- ANY
  • AtomicIntegral --- INTEGRAL
  • AtomicBytes --- BYTES
  • AtomicInt --- INT
  • AtomicUint --- UINT

These Atomic classes can be constructed on their own, but it is strongly suggested that you use the atomic() function to construct them. Each class corresponds to one of the above helper types (as indicated).

This library also provides Atomic*View (in atomics.view ) and Atomic*ViewContext (in atomics.ctx ) counterparts to the Atomic* classes, corresponding to the same helper types.

The latter set of classes can be constructed manually, although it is strongly suggested that you use the atomicview() function to construct them. The former set cannot be constructed manually with the available types, and should only be obtained by calling .__enter__() on a corresponding Atomic*ViewContext object.

Even though you should never need to directly use these classes (apart from the helper types), they are provided to be used in type hinting. The inheritance hierarchies are detailed in the ARCHITECTURE.md file (available on GitHub).

This library provides the functions atomic and atomicview , along with the types BYTES , INT , and UINT (as well as ANY and INTEGRAL ) to construct atomic objects like so:

You should only need to construct objects with an atype of BYTES , INT , or UINT . Using an atype of ANY or INTEGRAL will require additional kwargs, and an atype of ANY will result in an object that doesn't actually expose any atomic operations (only properties, explained in sections further on).

The atomic() function returns a corresponding Atomic* object.

The atomicview() function returns a corresponding Atomic*ViewContext object. You can use this context object in a with statement to obtain an Atomic*View object. The buffer parameter may be any object that supports the buffer protocol.

Construction can raise UnsupportedWidthException and AlignmentError .

NOTE: the width property of Atomic*View objects is derived from the buffer's length as if it were contiguous. It is equivalent to memoryview(buf).nbytes .
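For example, with an 8-byte buffer:

```python
buf = bytearray(8)
# the width an Atomic*View over `buf` would report
width = memoryview(buf).nbytes
print(width)
```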

Objects of Atomic* classes (i.e. objects returned by the atomic() function) have a self-contained buffer which is automatically freed. They can be passed around and stored like regular variables, and there is nothing special about their lifetime.

Objects of Atomic*ViewContext classes (i.e. objects returned by the atomicview() function) and Atomic*View objects obtained from said objects have a much stricter usage contract.

The buffer used to construct an Atomic*ViewContext object (either directly or through atomicview() ) MUST NOT be invalidated until .release() is called. This is aided by the fact that .release() is called automatically in .__exit__(...) and .__del__() . As long as you immediately use the context object in a with statement, and DO NOT invalidate the buffer inside that with scope, you will always be safe.

The protections implemented are shown in this example:

Furthermore, in CPython, all built-in types supporting the buffer protocol will throw a BufferError exception if you try to invalidate them while they're in use (i.e. before calling .release() ).
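This CPython behaviour can be observed with any exported buffer, independent of this library:

```python
ba = bytearray(4)
mv = memoryview(ba)  # exports ba's internal buffer
try:
    ba.pop()          # resizing while exported...
    raised = False
except BufferError:   # ...is refused by CPython
    raised = True
mv.release()
ba.pop()              # fine once the export is released
```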

As a last resort, if you absolutely must invalidate the buffer inside the with context (where you can't call .release() ), you may call .__exit__(...) manually on the Atomic*ViewContext object. This is to force explicitness about something considered to be bad practice and dangerous.

Where it's allowed, .release() may be called multiple times with no ill-effects. This also applies to .__exit__(...) , which has no restrictions on where it can be called.

Different platforms may each have their own alignment requirements for atomic operations of given widths. This library provides the Alignment class in atomics to ensure that a given buffer meets these requirements.

If an atomic class is constructed from a misaligned buffer, the constructor will raise AlignmentError .

By default, .is_valid calls .is_valid_recommended . The class Alignment also exposes .is_valid_minimum . Currently, no atomic class makes use of the minimum alignment, so checking for it is pointless. Support for it will be added in a future release.

All Atomic* and Atomic*View classes have the following properties:

  • width : width in bytes of the underlying buffer (as if it were contiguous)
  • readonly : whether the object supports modifying operations
  • ops_supported : a sorted list of OpType enum values representing which operations are supported on the object

Integral Atomic* and Atomic*View classes also have the following property:

  • signed : whether arithmetic operations are signed or unsigned

In both cases, the behaviour on overflow is defined to wraparound.
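The wraparound is ordinary modular arithmetic over the buffer's width; the library performs it at the hardware level, but in spirit it is equivalent to this sketch (for the unsigned case):

```python
def wrap_unsigned(value: int, width: int) -> int:
    # reduce to the range representable in `width` bytes
    return value % (1 << (width * 8))

print(wrap_unsigned(255 + 1, 1))  # incrementing past the max wraps to 0
print(wrap_unsigned(0 - 1, 1))    # decrementing past 0 wraps to the max
```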

Base Atomic and AtomicView objects (corresponding to ANY ) expose no atomic operations.

AtomicBytes and AtomicBytesView objects support the following operations:

  • [base] : load , store
  • [xchg] : exchange , cmpxchg_weak , cmpxchg_strong
  • [bitwise] : bit_test , bit_compl , bit_set , bit_reset
  • [binary] : bin_or , bin_xor , bin_and , bin_not
  • [binary] : bin_fetch_or , bin_fetch_xor , bin_fetch_and , bin_fetch_not

Integral Atomic* and Atomic*View classes additionally support the following operations:

  • [arithmetic] : add , sub , inc , dec , neg
  • [arithmetic] : fetch_add , fetch_sub , fetch_inc , fetch_dec , fetch_neg

The usage of (most of) these functions is modelled directly on the C++11 std::atomic implementation found here .

Compare Exchange ( cmpxchg_* )

The cmpxchg_* functions return CmpxchgResult . This has the attributes .success: bool which indicates whether the exchange took place, and .expected: T which holds the original value of the atomic object. The cmpxchg_weak function may fail spuriously, even if expected matches the actual value. It should be used as shown below:

In a real implementation of atomic_mul , care should be taken to ensure that desired fits in a (i.e. desired.bit_length() < (a.width * 8) , assuming 8 bits in a byte).

All operations can raise UnsupportedOperationException (so check .ops_supported if you need to be sure).

Operations load , store , and cmpxchg_* can raise MemoryOrderError if called with an invalid memory order. MemoryOrder enum values expose the functions is_valid_store_order() , is_valid_load_order() , and is_valid_fail_order() to check with.

AtomicBytes and AtomicBytesView implement the __bytes__ special method.

Integral Atomic* and Atomic*View classes implement the __int__ special method. They intentionally do not implement __index__ .

There is a notable lack of any classes implementing special methods corresponding to atomic operations; this is intentional. Assignment in Python is not available as a special method, and we do not want to encourage people to use other special methods with this class, lest it lead to them accidentally using assignment when they meant .store(...) .

The MemoryOrder enum class is provided in atomics , and the memory orders are directly copied from C++11's std::memory_order documentation found here , except for CONSUME (which would be pointless to expose in this library).

All operations have a default memory order, SEQ_CST . This enforces sequential consistency, essentially making your multi-threaded and/or multi-processed program as correct as if it ran in a single thread.

IF YOU DO NOT UNDERSTAND THE LINKED DOCUMENTATION, DO NOT USE YOUR OWN MEMORY ORDERS!!! Stick with the defaults to be safe. (And realistically, this is Python; you won't get a noticeable performance boost from using a more relaxed memory order.)

The following helper functions are provided:

  • .is_valid_store_order() (for the store op)
  • .is_valid_load_order() (for the load op)
  • .is_valid_fail_order() (for the fail ordering in cmpxchg_* ops)

Passing an invalid memory order to one of these ops will raise MemoryOrderError .

The following exceptions are available in atomics.exc :

  • AlignmentError
  • MemoryOrderError
  • UnsupportedWidthException
  • UnsupportedOperationException

IMPORTANT: Make sure you have the latest version of pip installed.

Using setup.py 's build or bdist_wheel commands will run the build_patomic command (which you can also run directly).

This clones the patomic library into a temporary directory, builds it, and then copies the shared library into atomics._clib .

This requires that git be installed on your system (a requirement of the GitPython module). You will also need an ANSI/C90 compliant C compiler (although ideally a more recent compiler should be used). CMake is also required but should be automatically pip install 'd if not available.

If you absolutely cannot get build_patomic to work, go to patomic , follow the instructions on building it (making sure to build the shared library version), and then copy-paste the shared library file into atomics._clib manually.

NOTE: Currently, the library builds a dummy extension in order to trick setuptools into building a non-purepython wheel. If you are ok with a purepython wheel, then feel free to remove the code for that from setup.py (at the bottom). Otherwise, you will need a C99 compliant C compiler, and probably the development libraries/headers for whichever version of Python you're using.

  • add docstrings
  • add support for minimum alignment
  • add support for constructing Atomic classes' buffers in shared memory
  • add support for passing Atomic objects to sub-processes and sub-interpreters
  • reimplement in C or Cython for performance gains (preliminary benchmarks put such implementations at 2x the speed of a raw int )

I don't have a guide for contributing yet. This section is here to make the following two points:

  • new operations must first be implemented in patomic before this library can be updated
  • new architectures, widths, and existing unsupported operations must be supported in patomic (no change required in this library)

atomics 1.0.2

pip install atomics

Released: Dec 10, 2021

Atomic lock-free primitives


License: GNU General Public License v3 (GPLv3)

Author: doodspav

Tags atomic, atomics, lock-free, lock free

Requires: Python <4, >=3.6

Maintainers

doodspav

Classifiers

  • Development Status :: 5 - Production/Stable
  • License :: OSI Approved :: GNU General Public License v3 (GPLv3)
  • Operating System :: OS Independent
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3 :: Only
  • Programming Language :: Python :: 3.6
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: Implementation :: CPython
  • Programming Language :: Python :: Implementation :: PyPy
  • Topic :: System :: Hardware :: Symmetric Multi-processing


Project details

Release history

Dec 10, 2021

Nov 15, 2021

Nov 10, 2021




COMMENTS

  1. signals

    Simple assignment to simple variables is "atomic" AKA threadsafe (compound assignments such as += or assignments to items or attributes of objects need not be, but your example is a simple assignment to a simple, albeit global, variable, thus safe). If the handler does (e.g.) gvar = 3, gvar is initially 7, and the code outside the handler does ...

  2. python atomic data types

    With a = 1, b = 1, c = 2/2, d = 12345, and e = 12345*1: a is b is True and a is c is also True, but d is e is False ( == works normally as expected). Immutable objects are atomic in the sense that changing them is thread-safe, because you do not actually change the object itself but just put a new reference in a variable (which is thread-safe).

  3. Thread Atomic Operations in Python

    Operations like assignment and adding values to a list or a dict in Python are atomic. In this tutorial you will discover thread atomic operations in Python. Let's get started. Atomic Operations An atomic operation is one or a sequence of code instructions that are completed without interruption. A program may be interrupted for one […]

  4. Which Python operations are atomic?

    The Python FAQ provides explanation and a full list of atomic operations, but the short answer is: The Python bytecode interpreter only switches between threads between bytecode instructions. The Global Interpreter Lock (GIL) only allows a single thread to execute at a time. Many operations translate to a single bytecode instruction.

  5. Python's Assignment Operator: Write Robust Assignments

    Here, variable represents a generic Python variable, while expression represents any Python object that you can provide as a concrete value—also known as a literal—or an expression that evaluates to a value. To execute an assignment statement like the above, Python runs the following steps: Evaluate the right-hand expression to produce a concrete value or object.

  6. PEP 583

    Python can't easily do this because we don't declare variables. This may or may not matter, since python locks aren't significantly more expensive than ordinary python code. If we want to get those tiers back, we could: Introduce a set of atomic types similar to Java's or C++'s . Unfortunately, we couldn't assign to them with =.

  7. Python

    Best Solution. Simple assignment to simple variables is "atomic" AKA threadsafe (compound assignments such as += or assignments to items or attributes of objects need not be, but your example is a simple assignment to a simple, albeit global, variable, thus safe).

  8. Getting Started with Data: Built-in Atomic Data Types

    We will begin our review by considering the atomic data types. Python has two main built-in numeric classes that implement the integer and floating point data types. ... A Python variable is created when a name is used for the first time on the left-hand side of an assignment statement. Assignment statements provide a way to associate a name ...

  9. Is Python variable assignment atomic?

    Answers: Simple assignment to simple variables is "atomic" AKA threadsafe (compound assignments such as += or assignments to items or attributes of objects need not be, but your example is a simple assignment to a simple, albeit global, variable, thus safe). Answered By: Alex Martelli. Compound assignment involves three steps: read-update ...

  10. GitHub

    atomicx. atomicx is an easy-to-use atomics library for Python, providing atomic integer and boolean operations. It allows you to perform atomic operations on shared variables, ensuring thread-safety and preventing race conditions in concurrent programming. Everything is entirely lock-free and is backed by Rust's atomic types.

  11. Variable Assignment

    Variable Assignment. Think of a variable as a name attached to a particular object. In Python, variables need not be declared or defined in advance, as is the case in many other programming languages. To create a variable, you just assign it a value and then start using it. Assignment is done with a single equals sign ( = ).

  12. Variables in Python

    To create a variable, you just assign it a value and then start using it. Assignment is done with a single equals sign (=): >>> n = 300. This is read or interpreted as "n is assigned the value 300". Once this is done, n can be used in a statement or expression, and its value will be substituted.

  13. Is Python variable assignment atomic?

    Yes, Python variable assignment is atomic. This means that each assignment operation is executed as a single, indivisible step and cannot be interrupted by a thread context switch part-way through.

  14. GitHub

    atomics. This library implements a wrapper around the lower level patomic C library (which is provided as part of this library through the build_patomic command in setup.py). It exposes hardware level lock-free (and address-free) atomic operations on a memory buffer, either internally allocated or externally provided, via a set of atomic classes.

  15. Variables and Assignment

    In Python there are restrictions on the name of variables. These include: variable names must start with a letter or the underscore character _; variable names can only contain alphanumeric characters and the underscore character (e.g. a-z, A-Z, 0-9 and _); variable names are case sensitive (e.g. x1 is a different name to X1, and variables with both names may exist at the same time).

  16. Python atomic access (related to threads)

    Python doesn't guarantee this code works every time. The solution is to use a lock around access to the active variable in both threads. This works because only one thread writes the variable, while the other thread only reads it.

  17. Python boolean thread safe

    Python uses the GIL (global interpreter lock) to prevent multiple threads from accessing objects at the same time. In CPython, the global interpreter lock, or GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecode at once. The GIL prevents race conditions and ensures thread safety for individual bytecode instructions.
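    Several of the results above concern the pattern of one thread writing a shared flag that another thread reads. A common standard-library alternative to a bare boolean plus a lock is threading.Event, which packages the flag together with the necessary synchronization; a minimal sketch:

```python
import threading
import time

stop = threading.Event()  # thread-safe boolean flag, initially unset

def worker():
    # reader: polls the flag until the writer sets it
    while not stop.is_set():
        time.sleep(0.01)  # stand-in for real work

t = threading.Thread(target=worker)
t.start()
stop.set()   # writer: signal the worker to exit
t.join()
print("worker stopped")
```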