It should matter which parts were written by the AI and which by the officer. Once the officer signs off on the report, they take full responsibility for the content.
I can only assume you meant to write "shouldn't" instead of "should", but if you study human factors you'll discover that certain kinds of shortcut-taking behavior are inevitable when dealing with humans. Speeding when we drive, for example. We know we are creating a material risk of getting pulled over and fined, but we basically decide to ignore that risk because for most of us it is outweighed by the convenience (and real value) of getting everywhere we're going faster.
As always, considering how a person would interact with an intern is surprisingly instructive about how they will form a working relationship with a non-sentient tool like a language model. You would expect them to give it a probationary period to earn their trust, after which, if they are satisfied, they will almost certainly express that trust by giving the tool a greater and greater degree of freedom with less active (and less critical) oversight.
It is not the initial state that worries me, where officers still mistrust a new technology and are vigilant about it. What worries me is the late stage, where they have learned to trust it (because it has learned to cover their asses correctly) and the AI itself ends up exercising power in human social structures, because people have a surprising bias toward not speaking up when it feels safer to keep your head down and go with the flow, even when the flow is letting AI take operational control of society inch by inch.
On your point about speeding: it is government policy for public transit bus drivers to drive 15 MPH over the limit to keep up with traffic, because that is safer, even though it is technically the government breaking the law.
You have the same worries as I do regarding AI taking operational control, which is likely inevitable at this point.
Yeah, Law Enforcement & the Judiciary are going to be an early flashpoint in the "a computer must never make a management decision" conflict. IMO it's actually really important that these systems do not explicitly mark content as AI-generated versus not, because if the operator is going to be held responsible for the final content, we can't allow repudiation of that content through an argument of "well, that part was AI-generated". Even if it isn't written in their voice, it should be reviewed and accepted by them, and at that point it doesn't matter whether the AI wrote it.
This is a contrived metaphor, but imagine some case report makes its way to a judge, and it's missing some significant details about the case. But it's hand-written, and the officer argues that hand-writing is physically harder than typing, so of course hand-written reports won't be as comprehensive as typed ones. That argument is insane, partly because it's an imperfect metaphor, but the line of logic is there.
There is no accountability behind that responsibility.
Cops are not held to account for lying now. Even when they are caught, 99% of the time the worst consequence for them is that their testimony is ignored in court. They don't face professional repercussions in practice.
So even if officers are responsible for their reports, they will still take the easy route and sign off on AI garbage. There is no downside, and it helps them relieve pressure from their bosses and avoid the part of the job they hate the most.
Do you think there is a difference between a civilian driver ignoring the routine maintenance schedule for their car and a professional pilot ignoring the maintenance schedule for their plane?
That's absolutely their choice, then. But if it turns out the AI wrote bullshit into the report, the officer who rubber-stamped it must be held accountable for that, no differently than if they had written the bullshit themselves.
"Responsibility" doesn't help the person who goes to prison based on the "evidence" of the bullshit report that the jury decided to believe. Or the person who goes to prison when the bullshit report should have exonerated them, but didn't. Or in the other direction entirely a victim whose abuser goes free because of the bullshit in the report.
Ultimately, accuracy in police reports is a high-stakes endeavour; it's not a place to be putting bullshit generators.