I believe I’ve finally settled on the proposal that works best for The Edge of Memory. But Holly Root’s post on the new Waxman Agency blog today reminded me of how the query process started for me and for several writing buddies. If there’s one thing that comes up over and over when writers discuss querying for representation, it’s how difficult it is to know what works and what doesn’t.
It comes down to a fundamental problem:
Many authors are willing to make changes to their proposals and manuscripts, but don’t know what needs to be changed. Many agents would be willing to make suggestions, but do not have the time and fear hostile responses to even the most constructive criticism.
So it occurred to me a while back that it might be possible to bring these two together so that everybody wins (Hey! You got chocolate in my peanut butter!).
In a subjective business like publishing, we have to rely on trends. To define a trend, we need data points. But obtaining data points from simple “yes” and “no” responses is difficult and slow. Let’s take a hypothetical example:
Author submits a proposal for “The Spoon That Moved” to Agent consisting of a query letter, a brief synopsis, and the first 5 pages. Agent sends rejection. Author only knows that the proposal didn’t work on Agent. Was it because Agent can’t stand stories about spoons? Was the query yawn-worthy? Did Agent read the query with excitement but the sample pages didn’t hold up? Did Agent actually love the proposal and seriously consider it before passing?
Author has no way of knowing. So she has two choices… submit the same proposal to someone else, or change the proposal. And she can’t be sure what to change. The process becomes a twisted game of Mastermind, where you never find out how you’re doing unless you happen to win.
Do we have the right query letter and synopsis, but the sample pages need work? Do we have all the right components but just on the wrong agent’s desk?
So… what if we embraced the Mastermind element?
Here’s my proposition… a standard rejection card WITH data points. Then, with only a handful of submissions, an author could identify a potential weak spot and fix it. The rejection card would take seconds to complete, and hopefully its standardized format would ward off overly emotional responses.
Here’s what I had in mind…
So what do we think? Helpful idea, or big pain in the butt?
Give your opinion in the comments!