A few days ago, I completed a survey about some proposition the California legislature was looking at placing on the ballot. The idea seemed good - require the legislature to actually create a budget and not go willy-nilly with spending when money rolls in. However, the legislature is already supposed to be doing this, so the proposition seemed pretty pointless (especially since there are already so many propositions tying the legislature's hands).
The questions seemed to be trying to steer opinion in a certain direction. Many questions were simply a dichotomy between "pass a budget" and "create jobs". Uggh, I think there are other root problems facing California - like a runaway initiative process and a legislature with firebrands from the left and the right.
I seemed to be going in their direction for most of the leading questions, but then I said I would vote against the proposition. So the survey went on and on and on.
A few days later, I got another call from a surveyor. He started the questions, and the first few sounded familiar. I asked him if it was about that same proposition. He skirted the answer with generalities. There were more questions. It was obvious it was the same survey. I told him I had taken it before and what my answers would be. He said he had to read everything verbatim.
He also thought there might have been a server crash, forcing them to do it all again. I didn't have the time or desire to repeat the process, so I left it at that.
I wonder what the impacts of this "server crash" are. Will they lose good information? Will repeat takers give totally different answers? (That could be something interesting to compare.) Or are they just clumsy and managed to lose all that data, leading them to bill the legislature double for the extra surveys? (Or do they just eat the costs?)
Technology and politics. A match made for each other.