Tuesday, 7 July 2015

Correlation

Electron correlation has always been this intangible thing to me. You can talk about correlation energy, about multi-configurational wavefunctions, about differences between dynamic and static correlation. But what is actually going on? Let's tackle the problem with statistics! This is what we did in our new paper "Statistical analysis of electronic excitation processes: Spatial location, compactness, charge transfer, and electron-hole correlation" in J. Comput. Chem.


How do you quantify correlation? With a correlation coefficient! All you need is a function of two variables; then you can compute its covariance and normalize it by the standard deviations. A logical choice for such a function would be the 2-body density, and given enough time and/or people doing it for me, I will look at that. For now, we chose the 1-particle transition density matrix (1TDM) between the ground and excited state. This function describes the electron and hole quasi-particles in the exciton picture (see this post). And, among other things, we can compute the correlation coefficient between the electron and the hole.
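Written out in plain statistical terms (generic notation here, not necessarily the symbols used in the paper): if $x_h$ and $x_e$ denote the hole and electron positions distributed according to the 1TDM, the electron-hole correlation coefficient is just the usual Pearson expression

$$R_{eh} = \frac{\langle x_h x_e \rangle - \langle x_h \rangle \langle x_e \rangle}{\sigma_h \, \sigma_e}$$

with $\sigma_h$ and $\sigma_e$ the standard deviations of the hole and electron positions.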

A good way to understand this new tool is the case of symmetric dimers. Because of the symmetry, all orbitals and states in such a system are delocalized over the whole molecule, and no net charge transfer can be seen. But it is clear that the charge transfer states do not disappear: they are just arranged in symmetric linear combinations, yielding the charge resonance states. The local excitations, on the other hand, are arranged in excitonic resonance states. Applying the new tool to this type of system shows that the difference is a correlation effect: positive correlation yields bound excitonic states, while negative correlation represents charge resonance.
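To make the sign convention concrete, here is a toy two-site model in numpy (purely illustrative, not code or data from the paper): the hole and the electron each sit on site A or B of a symmetric dimer, and their joint probabilities are collected in a 2x2 matrix. An excitonic resonance state puts all the weight on the diagonal (hole and electron on the same site), a charge resonance state puts it on the off-diagonal.

```python
import numpy as np

def eh_correlation(P, x=np.array([0.0, 1.0])):
    """Pearson correlation between hole and electron positions.

    P[i, j] is the joint probability of finding the hole on site i and
    the electron on site j; x holds the (arbitrary) site coordinates.
    """
    p_h = P.sum(axis=1)                  # marginal hole distribution
    p_e = P.sum(axis=0)                  # marginal electron distribution
    mean_h, mean_e = p_h @ x, p_e @ x
    cov = np.einsum('ij,i,j->', P, x, x) - mean_h * mean_e
    sig_h = np.sqrt(p_h @ x**2 - mean_h**2)
    sig_e = np.sqrt(p_e @ x**2 - mean_e**2)
    return cov / (sig_h * sig_e)

# Excitonic resonance: hole and electron always on the same site
P_exciton = np.array([[0.5, 0.0],
                      [0.0, 0.5]])
# Charge resonance: hole and electron always on opposite sites
P_charge = np.array([[0.0, 0.5],
                     [0.5, 0.0]])

print(eh_correlation(P_exciton))   # +1.0 -> bound excitonic state
print(eh_correlation(P_charge))    # -1.0 -> charge resonance state
```

Both states are completely delocalized and carry no net charge transfer, yet the correlation coefficient cleanly separates them.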

Friday, 19 June 2015

Color Charges

Should I boycott journals that charge money for color figures? Or at least send them only the papers that were rejected somewhere else? For a paper I would usually spend months researching a topic and probably another few weeks preparing graphics that allow quick comprehension even for the hectic reader. What if during this process I find out that the best way to represent my results is with a few colored lines? How can a journal editor in their right mind refuse to print those colored lines? Not only out of respect for my work, but also to avoid wasting the time of readers trying to decipher greyscale figures, printing them in color looks like a clear decision to me.

The absurd thing is that color charges are usually given per image. You pay the same price for a full-page color photo as for the one or two colored lines that make a graph so much more comprehensible. I can see why a journal would not want to pay for the former. But I am sure it would be possible to add a few colored lines at an acceptable cost.

To answer the question from above: I am not going to boycott any journals. But the question of whether or not I agree with the publication process certainly plays a role. Usually I have to choose a journal with the words Phys and Chem in it, arranged in arbitrary order and multiplicity. I could not tell you which one has a higher impact factor. But I can tell you whether or not I like their publication policies. If you are interested, my favorite is J. Chem. Phys.: easy-to-use manuscript template, free color figures, reasonable copyright policies, and no nonsense.

Thursday, 21 May 2015

Twisted Intramolecular Charge Transfer

Admittedly, we are not the only people working on dimethylaminobenzonitrile (DMABN) and its dual fluorescence. But it is an interesting system worth looking at. Our paper on this topic, "Intramolecular Charge Transfer Excited State Processes in 4-(N,N-Dimethylamino)benzonitrile: The Role of Twisting and the πσ* State", has finally been released, about four years after we started this project. Check it out if you are interested.


Friday, 24 April 2015

The 4-Hour Scientist

Tim Ferriss' 4-Hour Workweek is the ultimate treatise on how to work less. For most of us in science, eliminating work is not the utmost goal, as many of us like what we are doing. But there are still a number of things we can learn. First, making a dent in the cushion of your office chair is not an end in itself, and there is no reason why you should do this for eight hours every day. And, I would argue, not even the number of papers is a good measure of personal success as a scientist. Anyway, whether your goal is to have more free time, to spend time with your family, to train for a sports event, or really just to boost your publication list, here are some tips inspired by the book.

Work less, think more. What we are supposed to do is science, after all.

Automatize. Repetitive things can be programmed. If you don't know how, take a day off and learn basic bash and Python.

Email less. Email is one of the worst time killers and procrastination excuses. Always finish an actual task first before you even think about opening your emails. Then close them really quickly, get back to work, and go home at 4 p.m.

Focus. Pick two tasks for the day, finish them and go home. Eliminate your "task switching costs" and do not give yourself any excuses to drift off.

Prioritize. Work and payoff are not linearly related. As the story goes, 20% of the peapods in Pareto's garden produced 80% of the peas. And 80% of your output probably comes from 20% of your work. Find those 20% and eliminate the rest. On the other hand, 80% of your sleepless nights may derive from just 20% of your projects; eliminate those as well.

I would argue that focusing on doing important things rather than many things will provide you with an advantageous scientific profile. And eliminating the hopeless tasks may even increase your output. And even if not, it is time to relax. If toiling all day, doing dull things while stressing yourself out is really the only way to get ahead in science, why would you want that?

But is there a moral obligation to work a lot? Am I somehow tricking my sponsors if I don't work "enough"? Well, the dent in my office chair is really not what counts.

Tuesday, 10 March 2015

Löwdin orthogonalization

Did you know that you can do a Löwdin orthogonalization by a singular value decomposition? Usually, when I hear Löwdin orthogonalization, I think of some weird $S^{1/2}$ matrix, which scares me and I tend to stay away from it... But this pdf from the University of Oregon claims that you can do it in a different way. And it seems to work.

Say you have a matrix A and you want an orthogonal matrix that resembles it as closely as possible. What do you do? First you do a singular value decomposition of A:

$$A = U \Lambda V^T$$

Here U and V are orthogonal matrices and Λ is a diagonal matrix. We can now construct

$$A' = U V^T,$$

which is an orthogonal matrix, since U and V are both orthogonal matrices. But even more, A' is the orthogonal matrix that best resembles A, in the sense that it minimizes the Frobenius-norm distance among all orthogonal matrices Q:

$$\lVert A - A' \rVert_F \le \lVert A - Q \rVert_F$$

That is all you have to do.
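In numpy the whole procedure is a handful of lines. Here is a minimal sketch (the 3x3 input matrix is just a made-up example); as a sanity check it also compares against the classical route via $S^{-1/2}$ with S = A^T A, which gives the same result for a full-rank A:

```python
import numpy as np
from scipy.linalg import sqrtm

# Made-up, non-orthogonal input matrix
A = np.array([[1.0, 0.2, 0.0],
              [0.1, 0.9, 0.3],
              [0.0, 0.2, 1.1]])

# Singular value decomposition: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A)

# Closest orthogonal matrix: simply drop the singular values
A_orth = U @ Vt
print(np.allclose(A_orth.T @ A_orth, np.eye(3)))   # True: orthogonal

# Classical Löwdin route: A' = A (A^T A)^(-1/2)
S = A.T @ A
A_lowdin = A @ np.linalg.inv(sqrtm(S))
print(np.allclose(A_orth, A_lowdin))                # True: same matrix
```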