Category Archives: Book Review

“Some Advice” on Monte Carlo from Landau and Binder

I was flipping through the fourth edition of Landau and Binder’s excellent book on Monte Carlo for statistical physics and I came across this gem on p. 139:

We end this chapter by summarizing a few procedures which in our experience can be useful for reducing errors and making simulation studies more effective. These thoughts are quite general and widely applicable. While these ‘rules’ provide no ‘money-back’ guarantee that the results will be correct, they do provide a prudent guideline of steps to follow.

(1) In the very beginning, think.
What problem do you really want to solve and what method and strategy is best suited to the study. You may not always choose the best approach to begin with, but a little thought may reduce the number of false starts.

(2) In the beginning think small.
Work with small lattices and short runs. This is useful for obtaining rapid turnaround of results and for checking the correctness of a program. This also allows us to search rather rapidly through a wide range of parameter space to determine ranges with physically interesting behavior.

(3) Test the random number generator.
Find some limiting cases where accurate, or exact, values of certain properties can be calculated, and compare the results of your algorithm with different random number sequences and/or different random number generators.

(4) Look at systematic variations with system size and run length.
Use a wide range of sizes and run lengths and then use scaling forms to analyze data.

(5) Calculate error bars.
Search for and estimate both statistical and systematic errors. This enables both you and other researchers to evaluate the correctness of the conclusions which are drawn from the data.

(6) Make a few very long runs.
Do this to ensure that there is not some hidden time scale which is much longer than anticipated.
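
To make rule (3) concrete, here is a minimal sketch of my own (not from the book): estimate a quantity whose exact value is known, in this case π from the fraction of random points that land inside the unit quarter-circle, using two different random number generators, and check that both agree with the exact value to within the expected statistical error. The function name and parameter choices are purely illustrative.

```python
import numpy as np

def estimate_pi(bit_generator, n_samples=1_000_000):
    """Monte Carlo estimate of pi from points falling inside the unit quarter-circle."""
    rng = np.random.Generator(bit_generator)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = np.count_nonzero(x * x + y * y < 1.0)
    return 4.0 * inside / n_samples

# Rule (3): compare results from different generators (and sequences) against an exact value.
for name, bitgen in [("PCG64", np.random.PCG64(seed=42)),
                     ("MT19937", np.random.MT19937(seed=42))]:
    estimate = estimate_pi(bitgen)
    # For 10^6 samples the statistical error should be of order 1/sqrt(N) ~ 1e-3.
    print(f"{name}: {estimate:.4f}  (exact {np.pi:.4f}, deviation {abs(estimate - np.pi):.1e})")
```

Both runs should reproduce π to roughly one part in a thousand; a much larger, persistent discrepancy would point to a problem with the generator or the program.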

Cover of the book Hiroshima by John Hersey

Book Review: Hiroshima

The discovery of nuclear weapons might be the most consequential discovery that physicists will ever make. If you disagree, you will certainly agree with my hope that this discovery does not become any more important. I believe physicists have a special responsibility to both understand the legacy of nuclear weapons and help society to prevent them from ever being used again.

Last September, I visited the Hiroshima Peace Memorial Museum, a deeply moving testament to the horrifying consequences of war. While I was there, I purchased this book. It’s a short and excellent telling of the human impact of the bombing. I highly recommend it, especially to my fellow physicists.

Hiroshima by John Hersey

My Goodreads rating: 5 of 5 stars

“A short and beautiful book focusing on the human tragedy of people affected by the atomic bombing of Hiroshima and the lives they built in the aftermath.”

Article recommendation: Do quantum spin liquids exist?

I recently read this phenomenal Physics Today article by Takashi Imai and Young Lee, “Do quantum spin liquids exist?” Physics Today 69, 8, 30 (2016). It’s a couple of years old, but it gives a clear, relatively non-technical description of quantum spin liquids (QSLs) and why they’re interesting, along with an easy-to-follow history of developments in the field up to today. This provides some much-needed clarity, especially since journal articles often use varying definitions of a QSL.

Cover of the book "Rest"

Book Review: “Rest” by Alex Soojung-Kim Pang

Rest
by Alex Soojung-Kim Pang

A few months ago I had the pleasure of reading “Rest: Why You Get More Done When You Work Less” by Alex Soojung-Kim Pang. The core thesis of the book is that there is a limited amount of focused creative work that one can do each day, and that rest is an integral part of creative work. The book is a delight to read and (unlike many books in this genre) not overly long.

To support his thesis, Pang draws on a combination of scientific studies relating rest to productivity and a collection of case studies of famous creative people, including writers and scientists. As a scientist, I really appreciate that Pang correctly identifies scientific research as a fundamentally creative task, and he seems especially fond of famous physicists.

The “resting” brain is not inactive. During rest the subconscious mind continues processing the ideas that the conscious mind was thinking about, but it does it in a different, freer way. This explains the often-reported phenomenon of getting your best ideas while you’re in the shower, or while out on a walk. Working more hours isn’t a guarantee of accomplishing more:

A survey of scientists’ working lives conducted in the early 1950s … graphed the number of hours faculty spent in the office against the number of articles they produced. … The data revealed an M-shaped curve. The curve rose steeply at first and peaked at between ten and twenty hours per week. The curve then turned downward. Scientists who spent twenty-five hours in the workplace were no more productive than those who spent five. Scientists working thirty-five hours a week were half as productive as their twenty-hours-a-week colleagues. From there, the curve rose again, but more modestly.

Across disciplines from science to writing to music, the limit for focused creative work seems to be 4-5 hours per day. A study of violin students at the Berlin Conservatory found that the best students weren’t those who practiced the most.

“Deliberate practice is an effortful activity that can be sustained only for a limited time each day.” Practice too little and you never become world-class. Practice too much, though, and you increase the odds of being struck down by injury, draining yourself mentally, or burning out. To succeed, students must “avoid exhaustion” and “limit practice to an amount from which they can completely recover on a daily or weekly basis.”

Pang also discusses the roles different kinds of breaks—detachment, deep play, sabbaticals—play in enhancing creativity. It’s worth noting that rest in this view need not be passive.

I’ve taken Pang’s message to heart and the results of my small uncontrolled study confirm his thesis. Over the past few months I have tried to make more time for rest. That has taken many forms. During the work day I make sure to take breaks for my mind to wander. Just a few minutes at a time, but it seems to help. I have also cut back on podcasts so I have more time with my thoughts. On the weekends, I find long bike rides very refreshing as a rare time where I am free from electronic distractions. As a result, I now feel more focused and present with the tasks that I am doing. I am spending a bit less time in the office, but I am getting much more science done.

My dissertation is now available online from Springer!

The cover of my dissertation as published by Springer

Earlier this year David Campbell nominated my dissertation for a Springer Thesis Award. I’m proud to say that my dissertation won and it is now available from Springer. My dissertation covers almost all of the research I did during my PhD, focusing on magnetic field effects on quantum antiferromagnets, specifically metamagnetism and deconfined quantum criticality. I’m especially proud of my introduction (Ch. 1), which I tried to make accessible to a relatively broad audience, and my methods chapter (Ch. 5), a detailed pedagogical guide to the numerical methods I used in my work.

In Chapter 1 I describe the historical and scientific context for both the study I have undertaken and the methods I have used to do it. In doing so, I tell the story of Dr. Arianna Wright Rosenbluth, the woman physicist who wrote the first-ever modern Monte Carlo algorithm in 1953. To my knowledge this is the most complete account of her life ever published.

Chapter 2 is a lightly edited version of my 2017 Phys. Rev. B paper on metamagnetism and zero-scale-factor universality in the 1D J-Q model. In Chapter 3 I discuss these same features in the 2D J-Q model. Most of Chapter 3 has been published in my 2018 Phys. Rev. B paper, but the Springer version includes an additional analysis where we look at an alternative form of the logarithmic corrections to the zero-scale-factor universality based on the 4D Ising universality.

In Chapter 4 I study the deconfined quantum critical point separating the Néel and VBS phases in the 2D J-Q model. Using a field, I force a nonzero density of magnetic excitations and show that their thermodynamic behavior is consistent with deconfined spinons (the fractional excitations predicted by deconfined quantum criticality). I also discuss a field-induced BKT transition and non-monotonic temperature dependence of magnetization, a little-known feature of this type of transition.

Finally, in Chapter 5 I provide a detailed pedagogical description of my methods, focusing on stochastic series expansion quantum Monte Carlo and extensions thereof. Little in this chapter is my own invention, but many details of these techniques have not been written up anywhere else in the literature (another resource is Sandvik’s excellent review article).

If you’re interested in using my dissertation, please let me know and I can send you a PDF!


Book Review: Weapons of Math Destruction

Title: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Author: Cathy O’Neil
My rating: 4.5/5

Many of my classmates from grad school have found jobs as data scientists; others have become Wall Street quants. PhD physicists are often hired for these data science/big data jobs because we have the statistical and programming skills they require. As more and more quantitative (or otherwise computable) data becomes available, algorithms and data science are becoming an ever more important part of our lives. Rather than ask, “is this a good thing?”, perhaps the better question is “what are the downsides?”

This is exactly the question that O’Neil addresses in Weapons of Math Destruction: what happens when these algorithms go wrong? She defines WMDs by three qualities: opacity, scale, and damage. Opacity refers to the secrecy surrounding the algorithm and its underlying data, scale refers to its widespread use, and damage refers to the consequences when the algorithm goes wrong. She covers a wide array of examples in clear, nontechnical language.

One reason to use algorithms in place of human judgement is that humans have a well-established reputation for bias. A common misconception is that algorithms, because they are mathematical, are free of bias. O’Neil points out that algorithms reflect the biases (and ignorance) of their creators and the limitations of the underlying data. Perhaps the most striking example here is sentencing algorithms, which attempt to replace biased human judgement with impartial mathematics. In practice, these algorithms reproduce the same racial biases because the data that feed them (arrest records, zip codes, and so on) are themselves full of racial bias.

O’Neil also provides an excellent analysis of the effects of algorithms on our public discourse, where they enable microtargeting: delivering different messages to different potential voters based on detailed electronic dossiers of each. This tool is deliberately opaque, allowing campaigns to “pinpoint vulnerable voters and target them with fear-mongering campaigns… At the same time, they can keep those ads away from the eyes of voters likely to be turned off (or even disgusted) by such messaging”.

Algorithms aren’t going anywhere. We are steaming full speed towards a future where machines increasingly supplement and even supplant human judgement in vast areas of our lives, from hiring decisions to driving. This era is full of both promise and peril. Thus, it is essential to understand the dangers of weapons of math destruction and how we can protect ourselves from them. O’Neil is remarkably successful in addressing both of these questions, and she manages to do so without resorting to technical language. This book is essentially the algorithmic analog to Daniel Kahneman’s excellent catalog of the failures of human judgement, Thinking, Fast and Slow. Weapons of Math Destruction is essential reading for anyone living in the modern era, but especially for scientists seeking to apply their mathematical tools outside of their discipline.

Find it on: Goodreads, or Amazon

What is condensed matter physics?

Below is a lightly-edited excerpt from Ch. 1 of my dissertation in which I describe my field in the broadest possible terms. My dissertation is currently in production for publication in the “Springer Theses” series.


This dissertation is in the field of condensed matter physics, which, in the most informal sense possible, could be described as ‘the study of stuff that is not especially hot nor moving especially fast’ [1]. A more formal (but no less vague) definition is ‘the study of the behavior of large collections of interacting particles’ [2]. The haziness of this definition is appropriate since condensed matter is a very broad field encompassing the study of almost all everyday matter, including liquids, solids, and gels, as well as exotic matter like superconductors. Condensed matter physics is a tool for answering questions like: Why are some materials liquids? Why are others magnetic? What sorts of materials make good conductors of electricity? Why are ceramics brittle? Our understanding of condensed matter physics underlies much of modern technology; some prominent examples include ultra-precise atomic clocks, transistors [3], lasers, and both the superconducting magnets and the superconducting magnetometers used for magnetic resonance imaging (MRI). Condensed matter physics overlaps with the fields of magnetism, optics, materials science and solid state physics.

Condensed matter physics is concerned with the behavior of large collections of particles. These particles are easy to define: they will sometimes be atoms or molecules and occasionally electrons and nuclei; condensed matter is almost never concerned with any behavior at higher energy scales (i.e. no need to worry about quarks). The key word in the definition is large. Atoms are very small, so any macroscopic amount of matter has a huge number of them, somewhere around Avogadro’s number: 10²³. Large ensembles of particles display emergent phenomena that are not obvious consequences of the underlying laws that govern the behavior of their microscopic components. In the words of P.W. Anderson:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. …hierarchy does not imply that science X is “just applied Y.” At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. [4]

Emergent phenomena are not merely difficult to predict from the underlying microscopic laws; they are effectively unrelated to them. At the most extreme scale, no one would argue that consciousness is somehow a property of standard model particles, or that democracy is a state that could ever be described in terms of quantum field theory. Here I will focus on two such emergent phenomena: phase transitions, where symmetries of the underlying laws are spontaneously violated and behavior is independent of microscopic details, and quasiparticles, an almost infinite variety of excitations of many-body states of matter that bear no resemblance to the ‘real’ particles that make up the matter itself [5].

The most interesting problems tend to involve systems with interactions. To highlight the importance of interactions, let us first consider the case of noninteracting particles. The canonical example here is the ideal gas, which is composed of classical point-like particles that do not interact with each other. Because they do not interact, the motion of the particles is independent; if we want to know the energy of any particle, it is easy to calculate from its speed (E = mv²/2). The behavior of the whole system can be described by an ensemble of independent single particles. When the particles are interacting, things are very different. Instead of an ideal gas, let us consider a gas of classical electrons interacting via the 1/r Coulomb force. For two electrons the equations of motion can be solved analytically, but in a solid there are 10²³ electrons (for all practical purposes, we can round 10²³ up to infinity). To write down the energy of one of them, we must account for the position of every single other electron. Thus the energy of just one electron is a function of 3N variables. Even with just three particles, analytic (pen and paper) solutions are impossible in most cases. An analytic solution for the motion of 10²³ electrons is impossible, and “it’s not clear that such a solution, if it existed, would be useful” [6]. This is many-body physics. Instead of following individual particles, we describe their collective motion and the resulting emergent phenomena such as quasiparticles and phase transitions. Consider waves crashing on the beach. It would be foolish to try to understand this phenomenon by following the motion of all the individual water molecules. Instead, we can treat the water as a continuous substance with some emergent properties like density and viscosity. We can then study the waves as excitations of the ground state of the water (the state without waves).
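
As a rough illustration of the scaling argument above (a toy sketch of my own, not an excerpt from the dissertation): for the ideal gas the total energy is a sum of N independent single-particle terms, while for a 1/r interaction the energy involves every pair of particles, so the energy of any one particle depends on the positions of all the others. The particle number and units below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500        # a stand-in for 10**23, which no computer could ever enumerate
m = 1.0        # particle mass in arbitrary units

velocities = rng.normal(size=(N, 3))
positions = rng.random(size=(N, 3))

# Ideal (noninteracting) gas: each particle contributes E = m*v^2/2 independently,
# so the total energy is just a sum of N single-particle terms.
kinetic_energy = 0.5 * m * np.sum(velocities**2)

# Coulomb-like 1/r interaction: the energy is a sum over all pairs, so the energy
# of any single particle is a function of the positions of all the others.
interaction_energy = 0.0
for i in range(N):
    for j in range(i + 1, N):
        r = np.linalg.norm(positions[i] - positions[j])
        interaction_energy += 1.0 / r

print(f"{N} independent kinetic terms vs. {N * (N - 1) // 2} interacting pairs")
```

For N of order 10²³ the number of pairs is astronomically large, which is why we give up on tracking individual particles and describe the collective behavior instead.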

[1] This definition distinguishes condensed matter from particle physics (the other broad subdiscipline of physics), which is the ‘study of really hot and really fast-moving objects.’
[2] In practice, condensed matter tends to be the term used to describe physics that does not fit into one of the smaller, more well-defined subdisciplines like high-energy physics or cosmology.
[3] Both transistors and atomic clocks are essential to cellular telephones and satellite navigation systems like GPS.
[4] This quote is taken from “More is different,” Science 177, 393 (1972) by P.W. Anderson, an excellent refutation of reductionism and discussion of emergent phenomena written in a manner that should be accessible to non-physicists.
[5] I hope to post non-technical descriptions of phase transitions and quasiparticles at some point in the future.
[6] Chaikin and Lubensky, 1998, p. 1

Book Review: A Guide to Writing for Scientists

Title: A Guide to Writing for Scientists: How to write more easily and effectively throughout your scientific career

cover of book

Author: Stephen B. Heard

Heard has produced an excellent guide to scientific writing that, despite its 300 pages, is a pleasure to read. He addresses a huge array of issues that affect scientific writing and manages to do so in a manner that seems to apply well to all scientific writing. Given the gap between Heard’s field and my own, I would say that is a notable accomplishment. In addition to style, peer review, and other issues, Heard offers excellent advice on the process of writing and how one can become a more productive writer.

I highly recommend this book to any scientist. It’s a great book to read a little at a time as you sit down to write your next paper. Scientists for whom English is an additional language may be especially interested in the chapter on writing in English for non-native speakers.

Find it on: Goodreads, or Amazon