I’ve begun the long overdue task of cleaning my room and organizing my belongings. I was never very organized during the school year, and when I moved to an off-campus apartment for the summer, I simply packed the mess into boxes and brought it with me. Now, I’m working to put pencils with other pencils, electronics cords with other cords, books with other books, and so on.
There’s a great deal of stuff that I’m choosing to keep with the full knowledge that I’ll probably never use or even look at these objects. They fall into two categories, roughly. The first are objects with sentimental value – cards, notes, gifts, doo-dads that remind me of a particular time or person, etc. But then there are the objects that don’t have any value to me whatsoever. For instance, I have a small plastic rhinoceros figurine that I found for free at a garage sale a year ago. It’s not very nice-looking, and I don’t have any memories associated with it. I don’t really have any reason to keep it – but at the same time, I don’t really have any reason to throw it away. It’s never done me wrong, and besides, maybe someone else will want it someday.
I think the choice of whether to keep it or part with it would be much easier if household objects could only be disposed of via garage sales, as opposed to being thrown away. That way, you would know that your object is at least going to be of use to someone else. But when the most convenient option for parting with an object is to throw it away into oblivion, it’s harder to justify the object’s disposal. I think this is because many objects, no matter how useless they appear to their owners, always have the potential to be valued. I think this can help to make sense of why many Americans hold on to the objects we usually call ‘junk’ – buttons, figurines, parts of other objects, etc. It also seems rational that, in a society where the two main choices for dealing with ‘junk’ are (1) to keep it and let it exist, and (2) complete destruction, that we would rather give objects a chance to have value to someone, rather than destroy the possibility altogether.
The ending scene of Toy Story 3 probably conveys this idea best. (Spoiler alert!)
I began studying for the LSAT last week. I’m currently working on the Analytical Reasoning problems, which the Princeton Review aptly calls “Games”. After all, the problems are just the sort of logic puzzles that one might do for pleasure on a rainy day. For anyone thinking about taking the LSAT in the future, do yourself a favor: take a logic class. Or two. If you take the time now to learn how to symbolize arguments, restrictions, and causal relationships, you’ll be well prepared when it comes time to take the LSAT. I am finding that the two semesters of logic I took are really paying off.
One satisfying aspect of the LSAT puzzles, or of any similar brainteasers, is that you know going into the problem that there is a correct answer. Even if you become frustrated and cannot finish, there’s usually an answer key that will, at the very least, ease your mind. But one of the things I love about logic is that the ‘logic’ part isn’t everything. For instance, take the following argument:
“If it is raining, I take my umbrella with me. I took my umbrella with me. Therefore, it is raining.”
This is a problematic argument:
A -> B (If it’s raining, then I take my umbrella.)
B (I take my umbrella.)
A (Therefore, it’s raining.)
Logic can tell us what we need to counter this argument: an instance where B occurs without A occurring. For instance:
- The umbrella protects my skin from the sun.
- I’m bringing the umbrella to the store to return it.
- I use the umbrella to fight off street criminals.
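For the mechanically minded, the invalidity can also be checked by brute force: try every truth assignment and look for one where the premises hold but the conclusion fails. Here is a minimal sketch in Python (purely illustrative – the variable names are my own):

```python
from itertools import product

def implies(a, b):
    """Material conditional: A -> B is false only when A is true and B is false."""
    return (not a) or b

# The argument: premises are "A -> B" and "B"; conclusion is "A".
# It is invalid if some assignment makes both premises true but the conclusion false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and b and not a  # premises true, conclusion false
]

print(counterexamples)
```

This prints `[(False, True)]` – exactly the umbrella-without-rain case: B (I took my umbrella) is true while A (it is raining) is false, yet both premises are satisfied.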
I can recall many times in my logic classes when other students were either afraid to propose such a scenario in class or were unable to do so. For me, coming up with these scenarios is one of the most fun parts of philosophy. Being able to imagine these sorts of examples seems crucial to reasoning and arguing. If all you have is logic, you can say that an argument form is invalid, but that alone won’t convince anyone around you that you’re correct (and you might have a hard time convincing yourself). If you have imagination, you can pull out a far-fetched, coherent counterexample and convince everyone in the room without ever invoking symbols or syllogisms.
Willy Wonka puts it best:
Come with me
And you’ll be
In a world of
Pure imagination
Take a look
And you’ll see
Into your imagination
With a spin
The world of my creation
What we’ll see
…and as long as it doesn’t defy reason, you’ll be ready for any argument.
(Possible spoilers for Inception follow. Read at your own risk!)
I would highly recommend that you see Christopher Nolan’s Inception – ideally, this weekend, when the screen is big and the crowds are large. It’s a mind-bending film that shares a lot of similarities with Blade Runner, Waking Life, and the 1962 French short film La jetée. There’s a great deal to discuss about Inception from a philosophical angle, but one particular theme that stuck out was the idea of ‘true’ inspiration.
The main characters of Inception contend that it is obviously possible to steal ideas from a person’s subconscious, but that it is unclear whether or not it is possible to implant an idea in another’s mind – that is, to ‘incept’ the idea – without this person realizing that he or she has been infiltrated. Apparently, an idea is not your own if it can be traced to an outside source. For instance, in one scene, Leonardo DiCaprio’s character Cobb tells the character Saito, “Don’t think about an elephant.” Saito replies that he cannot help but think of an elephant, but Cobb says that this idea of an elephant isn’t really Saito’s. After all, Saito is only thinking of an elephant because Cobb forced Saito’s mind to do so.
But all of this raises the question of how true inspiration is even possible. Can an idea really just appear on its own? What sort of ideas could come to us in this way, free from perception and experience? There seem to be two possibilities. The stronger option is to grant that some ideas are present, or at least accessible, in the mind before experience (a priori). So an ‘inspired’ person isn’t really thinking of something new, but rather, plumbing the depths of the mind in a way that any person could. The second option is to admit that while the things we see and the way we feel are due to outside sources, inspiration can occur through the unique way that the mind organizes these variables. Thus, inspiration is still traceable mainly to the original thinker.
I think this is what was behind my unease when one of the movie’s characters stated, as a matter of fact, that emotion is more powerful than reason when it comes to formulating a new idea. This seems false, considering that one could have great feelings of sadness or anger over something like a break-up without forming a good idea about how to overcome the loss. On the contrary, it’s when we take a step back and observe our situation in a rational way (alone or with the help of a friend) that we can plan a course of action to get through the toughest of times.
This is probably my one objection to Inception – that it treats inspiration as an emotional, subconscious phenomenon. There’s no doubt that emotion contributes to the forming of new ideas, but without the reasoning process present, all that emotional stuff is useless. Inception won’t work on a dummy.
Like many of my fellow college seniors, I am not sure what I’ll be doing one year from now. But in weighing my options, I’ve been struck by the fact that having to decide one’s own line of work must be an extremely recent phenomenon in human history. For the majority of humans throughout history, one likely did the same work as one’s parents, barring some sort of apprenticeship or military draft. These exceptions, too, were forced choices.
It’s hard to imagine choosing one’s career path as anything but a modern phenomenon. Can you picture ‘career centers’ in Medieval Europe handing out copies of “What Colour Be Ye Parachute?” There’s no doubt that a few daring individuals throughout history have gone against the grain, perhaps running off to a monastery or joining a pirate crew. But for most, future plans were set from birth. This is certainly still the case today. Even among the privileged few humans with access to food and water, literacy, and education, many head to school and study a particular subject because it’s expected of them by their parents.
Because many students are rightly caught up in the individual struggle to choose a career, I feel that we’ve neglected to fully examine this phenomenon of choice in general. Instead, a whole host of beliefs and intuitions about how to choose a job have emerged: “Just follow your heart,” “Do what you’re best at,” “Do what you love,” “Know that you’ll probably end up doing something completely different.” Many of the older adults I’ve spoken with have described their career paths using similar language, as if they had been tossed about by intuition and chance for years before landing somewhere stable.
But if we examine the history and characteristics of this choice in general, we may be better equipped to take on the challenge of choosing a career. For instance, one prevalent belief is that we should find a job that “makes us happy.” But we know that countless individuals throughout history had no choice in their livelihoods. Were any of them happy nevertheless? If so, how did they achieve happiness? If we can learn to find happiness without looking for it in a career, we could spare ourselves a great deal of the effort and stress that goes into finding the ‘perfect job’.
Learning more about the history of career-choice and how work relates to psychological well-being offers many benefits. And regardless of the benefits that studying this phenomenon could bring, a culture shift so recent and so rare deserves more attention.
For the past few weeks, I’ve been scoping out books from the library and from my own collection in an attempt to jump-start my brain. Today’s title, The Elements of Jurisprudence, was written in 1917 by Sir Thomas Erskine Holland. I bought it at a used bookstore in Ohio a few summers ago, partially for the subject matter, but more so for its imposing gold print and aged brown covering. Out of its four hundred pages (littered with legal quotations in Greek, Latin, French, and German), I read sixty-six before stopping. I’ve gone through a few dozen books like this over the course of the summer. More humiliating than failing to finish is returning the books to the library, effectively canceling out any respect I gained by checking out a batch of seven books at once.
But it’s not that the books are bad; nor do I think it’s an internet-induced short attention span (I have finished a few books this summer). The problem is that instead of following my own questions and ideas concerning the subjects that I care about, I’ve only been looking to others for intellectual satisfaction. And of course this will be ineffective; is the desire to play music satisfied by listening to someone else play? With music, as with knowledge, it is easy to forget about process and only remember the end result. Whether it’s me or someone else playing “Happy Birthday” on piano means nothing to an observer, but it means everything to what sort of experience I have. Will I listen and be merely pleased? Or will I play, relying on concentration and creativity? The outcomes of the two situations may be objectively equal, but this does not entail equal experiences for the listener and the performer. Likewise, if I followed a course of reasoning for a year and happened to write Aristotle’s “Nicomachean Ethics” word for word, I would not have been just as well off reading the original. The process of creating offers something apart from its results.
Perhaps publishers ought to begin selling ‘great books’ with the pages blank. This would remind us that when we reach for an interesting title, we’re not always after great thoughts, but rather, the great challenge of thinking and creating. If you find yourself caught in a cycle of checking out books and returning them a few days later in defeat, ask yourself whether you’re looking to read, or if you’re actually looking to create something new. I did today, and that’s why I’ve started this blog.