Stop Waiting For Inspiration

I’m reading a fun book written and compiled by Mason Currey called Daily Rituals: How Artists Work. Although scholars might not fall into the category of “artists” in the conventional sense, there are many parallels between the nature of an artist’s work and the nature of a scholar’s work.

1. both are time-consuming
2. both often go unrewarded for extended periods of time
3. both require the discipline of a self-starter
4. both require a kind of dedication that seems crazy to others
5. both are plagued by ideas of inspiration and revelation

I’m sure there are other similarities, but it’s the last one I want to talk about for a minute. What I like about Currey’s book is that he includes artists with incredibly different working methods. Some get up at dawn and work tirelessly until lunch before resting all afternoon. Some work through the middle of the night. Some rise at lunch, work through the afternoon, and then drink all evening. But there are also some common practices that nearly all of them abide by and some common philosophies that guide most artists’ approaches to their work. My favorite philosophy is that you shouldn’t wait for inspiration to strike, but rather just get to work.

While there are a few folks in the book who conceive of their work in terms of a series of “lightning strikes” (Arthur Miller), even most of these figures got up every day and followed some kind of routine.

The American visual artist Chuck Close captures this philosophy best: “Inspiration is for amateurs. The rest of us just show up and get to work” (Daily Rituals 64).

In the daily grind of the grad student, the lecturer, or the junior professor, this maxim is especially true. Study is a joy, but it’s also a job. Reading is thrilling (most of the time!), but it is also a responsibility. Stop waiting for inspiration to strike, because even if that’s the way your brain works, it’s more likely to strike when, like Arthur Miller, you “get up in the morning and … go out to [your] studio and … write.” Miller would tear up his work again and again until something stuck, and then he would follow that trail. But even the lightning strike was the result of a daily ritual.

Procrastination is a Poor Substitute

As I sat at my computer this morning tinkering with email and rechecking things I had already checked, I was distantly aware that I was procrastinating. So, instead of actually getting to work (I never lack specific tasks; I’m a researcher/writer/teacher!), I pulled up one of my favorite blogs and read a post about procrastination, which I will not hyperlink here for the sole purpose of avoiding the unbelievable triple irony I’m trying to expose!

Instead of getting to work, I went and read a sincerely helpful blog post about strategies for getting to work. Ahh! The move wasn’t necessarily bad, but the impulse behind it was 100% about not getting into the work I needed to do. And here’s why procrastination feels good while you’re engaged in it: you feel like you’re accomplishing something, even if it’s something relatively trivial.

I’ll procrastinate by reading a blog post, checking email, grading a paper, or doing some other more enjoyable and measurable task. So I feel good for a minute because I get something done. But that feeling is very fleeting. Procrastination is a poor substitute for accomplishing a more serious objective.

What feels 10 times better is actually accomplishing something substantive on a larger ongoing project. In academia most of the work we do is long-term. It takes a while to be able to see the results of working on a journal article, conference paper, book, or teaching portfolio. But in the long run, those accomplishments are not fleeting.

So why don’t we just get amazing work done constantly? Mostly because we can’t feel the effects of that work as quickly as we can feel the effects of hitting “send,” or finishing a blog post on procrastination. When we do those things we feel as if we’ve accomplished something. Writing 400 words of an 8,000-word article doesn’t feel as rewarding.

That’s why it’s so important to break your larger tasks down into smaller tasks. Make every 750 words, or however you want to organize your work, count for something! Another strategy for countering procrastination is to set aside specific time in which procrastination is ok.

But if you’re struggling with procrastination today, then I suggest you join me in pulling up Google Timer, setting it for a short amount of time, say 10 minutes, and working on something you know you should be doing. You’ve got to start somewhere!

That Time of Year: A Brief Survival Guide

Whether you’re a graduate student coming up against seminar papers, a hopeful job marketeer, or a lecturer or junior professor coming to terms with impending stacks of student essays, it’s that time of year. It’s that time of year that can only be described as a perfect storm of expectations, responsibilities, and obligations.

What are you going to do?

It’s ok to run and hide, but only for a little while. First, remember that you’ve finished countless semesters in the past. The day after the last day has always come, and you have always (hopefully) finished. You will finish again.

Yes, but HOW???? Here are four simple steps to get things going:

1. Make a list

What tasks must absolutely be accomplished by the end of the semester? Write them all down with no regard for the order, size, or difficulty of the tasks. Getting it all down on paper will help you see everything at once, and (again, hopefully) demonstrate that what lies before you is doable.

2. Prioritize the list

Once I’ve got all my necessary tasks down on paper, I usually prioritize the list by date. What has to be done first, second, third…?

3. Guesstimate a timeline for each task

This step is more difficult, but it gets easier with time and experience. I now know, for instance, about how long it will take me to grade a stack of 25 papers from my survey course. I know roughly how much time I need to generate an abstract for a conference proposal.

4. Set a timer and tackle the first task

Sit down at your desk/workspace, set a timer (I use an online timer; anything will do), and start on task #1. Don’t wait for anything. Why wait? If you run over on time, you have a decision to make:
     A. Continue on with the task until you’re finished
     B. Reset the timer and start the next task
What you decide to do will depend on your individual timeline leading up to the semester’s end. It will also depend on what your blocks of time look like. If, for example, you have a two-hour block of time tomorrow morning and you finish the first task in 35 minutes, but run up against teaching a class toward the end of the second task, you will have to decide whether to pick up where you left off, or to move on and come back to the unfinished task the next day.

This approach is driven by a time management philosophy that embraces the fact that you can only control things that are…well…within your control. You cannot control time, contrary to that movie with the hot tub, and you often cannot control your deadlines. But you can control how much time you allot these tasks, and the order in which you tackle them. Control what you can control!

I Didn’t Know That I Didn’t Know

The most common and unsettling feeling I had throughout graduate coursework, exams, and the dissertation was realizing that I did not understand why the good things I said and wrote were good.

It’s not that everything I said and wrote was good.

It’s that when I did say something good, or when professors got excited about an insight or argument of mine, it often turned out that they valued my work for reasons other than those I had in mind.

Maybe this is because of what Verlyn Klinkenborg says in his book Several Short Sentences About Writing:

The central fact of your education is this:
You’ve been taught to believe that what you discover
by thinking,
By examining your own thoughts and perceptions,
Is unimportant and unauthorized.
As a result, you fear thinking,
And you don’t believe your thoughts are interesting,
Because you haven’t learned to be interested in them. (36)

Klinkenborg’s book is about writing, but his insights about all that goes into writing–noticing things, thinking about them, noticing that you’re thinking about things, thinking about noticing that you’re noticing things–suggest that these things have been deadened and killed off in most of us. But what has not been killed off in us, Klinkenborg says, is our ability to pick up on little disturbances in our prose, things that are just not quite right:

No one taught you to disregard these inner sensations.
No one taught you to be aware of them either.
No one even acknowledged that they exist.
You thought they weren’t significant–
Mainly because they were occurring within you.
And what do you know (you’re always tempted to ask)?
You know a lot, especially in a preconscious kind of
way. (53-54)

His articulation of that feeling “what do I know?” is right on. I don’t know how often I have chosen to continue a line of thinking or inquiry that I didn’t really care about because it seemed like the easiest path to thinking or writing in the way I was supposed to think or write.

To some extent, we have to learn and employ conventions in our academic writing. That’s how we join the community. But who says that the things we notice, analyze, and theorize about have to be determined by such narrow restrictions in the first place?

Part 4 – Journal Rankings in a Non-Sleazy Sense

Ok, so the end-of-semester crunch created a sizable gap between the last post and this final post in my 4-part series on how to approach the phenomenon of journal rankings in the humanities without feeling like a mercenary sleazeball. Sorry for the layoff; you may want to breeze back through the previous three short posts on how I first became interested in this topic, the fundamentals of the h-index for individual scholars, and the function of the h5-index for journals.

The central idea of this post is that there is a non-sleazy way in which journal rankings can matter to the humanities scholar. That is, there is a sense in which you can be concerned with things like how high a particular journal is ranked, without feeling like, or literally being, someone who just wants to be a “rock star,” if such a thing is even possible in the humanities!

Here’s the idea: As scholars we are part of a community that is ideally committed to critical inquiry, the search for truth, and training others to become innovative and ethical thinkers in their own right. And to be a part of any community, there are certain ideas, documents, and problems that we should all be familiar with. After all, the things we share in common are what make us a community, even if we disagree about those things. Thus, journal rankings can be a genuine way of gauging which ideas, documents, and problems our community is valuing, questioning, and engaging at any given moment.

Case study: My field is American literary studies. When I click on Google Scholar’s Metrics and then click on “Humanities, Literature, & Arts” on the left-hand side, I can open up the “subcategories” and click on “American Literature & Studies.” Doing so reveals a list of journals ranked according to their h5-indexes, and I can see that American Literary History has both the highest h5-index and the highest h5-median in my field of professional study. Knowing that this journal is vital to my field, I click on it and see that Richard Gray’s 2009 article “Open Doors, Closed Minds: American Prose Writing at a Time of Crisis” is perhaps the most-cited article in my entire field. Don’t I have at least some obligation to know what the article is about, if not to read it in its entirety?

So, my goal in understanding journal rankings is not simply to try to place my work in the most prestigious journals and “get my name out there,” but to be up to speed and engaged in good faith with my community. I have an ethical obligation to contribute and to listen to others. Now, does that mean that if you want to submit to American Literary History you’re some kind of recognition-seeker in a sleazy way? NO!! Perhaps it just means that you want to engage with what our community has deemed some of the most important questions/ideas in the field right now. Perhaps you want to shape our field of study!

Of course, we could be more skeptical about the entire enterprise, but why not approach it as a meaningful way of understanding and contributing to our community?

Quantifying Research at the Journal Level: The h5-Index

Try this:

1. Go to Google Scholar
2. Click on Metrics
3. Click on your preferred scholarly discipline on the left-hand side

Once you get to your chosen field or subfield, what you’ll find is a list of the top journals in that field ranked from highest to lowest in terms of their respective h5-indexes.

The h5-index works similarly, but not identically, to the h-index I discussed in the last post, which you can use to quantify the work of an individual scholar. The difference is that a journal’s h5-index indicates the h-index for the entire journal over the last 5 years. An individual scholar’s h-index is the largest number of articles, h, that have each been cited at least h times. A journal’s h5-index is the largest number of articles, h, published by the journal over the last 5 years that have each been cited at least h times.

So, if your favorite journal has an h5-index of 10, that means the journal has published 10 articles over the last 5 years that have each been cited at least 10 times. The most-cited of those articles may have been cited 50 times, while the tenth most-cited article may have been cited only 10 times. Why not count an eleventh article? Because the eleventh most-cited article has fewer than 11 citations; remember, the h5-index is the largest number of articles, h, from the journal over the last 5 years that have each been cited at least h times.
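If it helps to see the arithmetic behind that example, here is a minimal sketch in Python of how such an index could be computed. The function name and the citation counts are my own invention for illustration; Google Scholar, of course, calculates this for you.

```python
def h5_index(citations):
    """Return the largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited articles first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all have at least `rank` citations
        else:
            break
    return h

# Hypothetical journal: 12 articles published over the last 5 years.
journal_citations = [50, 31, 24, 19, 15, 14, 12, 11, 10, 10, 7, 3]
print(h5_index(journal_citations))  # -> 10: ten articles cited at least 10 times each
```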

The Google Scholar metrics thus rank journals in your field according to how much they are being cited. These metrics, then, give you a sense of which journals are having the greatest impact on your field as a whole, on individual scholars’ works, and thus on your formation as a knowledge worker.

So far the posts in this series have been purely informative, avoiding interpretation of these metrics as much as possible. Beginning with the next post, however, I want to offer some useful ways of thinking about this data that will be geared towards non-sleazy strategies for making this information work for you.

Quantifying Quality for the Individual Scholar

When I went on the job market last year I had an unsettling this-is-how-things-work realization: beyond writing a strong teaching statement, I had no way to “prove” to hiring committees that I was a thoughtful and effective teacher. I could “prove” myself as a productive scholar to some extent by listing conference presentations and publications on my C.V., and I could quantify my service to the profession and my department by listing my contributions on the same document. I realized, for better or worse, that I needed to make my teaching more visible to those who would read my job documents, because those documents were all they would know of me. One way to do this was to try to win a teaching award or some other commendation that I could include on my C.V.

[cool collage by Leo Reynolds available at Flickr Creative Commons]

This need to quantify made me feel a bit sleazy, like I was only teaching to win an award to get a job. That wasn’t at all the reality of my situation, but the way the profession works forced me at least to add that dimension to my thinking about becoming a serious member of the community.

The bottom line here is visibility. Knowledge workers are often required to render predominantly intangible aspects of their work tangible, or at least visible, to others both inside and outside their fields.

Back in 2005, a physicist named Jorge E. Hirsch recognized a problem similar to the one I’ve described above when he pointed out in an article in Proceedings of the National Academy of Sciences of the United States of America that, short of winning a Nobel Prize or some other highly visible award, it is very difficult for a scientist to “quantify the cumulative impact and relevance” of her/his “research output” (see the first paragraph of the article linked above).

So, he proposed an index that would quantify both the impact and relevance of scholarship by tracking the number of articles a scholar publishes and the number of times the scholar’s articles have been cited in other articles. Or, rendered in Hirsch’s more technical language (the following comes directly from the article linked above as well):

“A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each.”

In terms of practical application, Alan Marnett explains, “So we can ask ourselves, ‘Have I published one paper that’s been cited at least once?’  If so, we’ve got an H-index of one and we can move on to the next question, ‘Have I published two papers that have each been cited at least twice?’  If so, our score is 2 and we can continue to repeat this line of questioning until we can’t answer ‘yes’ anymore.”
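To make that line of questioning concrete, here is a rough sketch in Python that asks Marnett’s questions over a list of citation counts, one per published paper. The numbers below are invented purely for illustration; in practice Google Scholar computes the h-index for any author profile.

```python
def h_index(citations):
    """Keep asking: 'Have I published h papers that have each been cited at least h times?'"""
    h = 0
    # As long as the answer to the next question is still 'yes', raise h by one.
    while sum(1 for c in citations if c >= h + 1) >= h + 1:
        h += 1
    return h

my_citations = [9, 4, 4, 2, 1, 0]  # hypothetical scholar with six papers
print(h_index(my_citations))       # -> 3: three papers cited at least 3 times each
```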

What’s cool about the h-index (as it has come to be called) is that it depends entirely on the impact a researcher’s work has, and not on the perceived prestige of a particular journal. This is not to suggest that some journals are not prestigious for good reasons. In fact, in the next post I’ll address how this quantification manifests itself in terms of journals and how we can use this information to become better members of the scholarly community despite what may seem a necessary distastefulness inherent in this whole process.