The error is usually right: you tried to insert a value too large for the column definition, so Oracle rejects the statement. And although the message mentions an insert, the same error is easy enough to understand for updates, too.
But recently I hit this error on a delete statement. And no, there was no trigger or other advanced feature doing an insert in the background. So why on Earth, why?
The application (the apply process of Dbvisit Replicate) was using OCI, and the error indeed came from the OCIStmtExecute call. As it turns out, however, the devil was elsewhere.
One of the confusing topics in OCI is string handling: there are four ways to pass a string to Oracle, depending on whether it is null-terminated or length-specified, and whether it should be space-padded (think CHAR vs. VARCHAR2 semantics):
- SQLT_CHR: varchar2-style, length-specified
- SQLT_STR: varchar2-style, null-terminated
- SQLT_AFC: char-style, length-specified
- SQLT_AVC: char-style, null-terminated
- (and a few more for LONGs, RAWs and a different API for LOBs)
And yes, this is exactly the cause of the ORA-01401 here: binding a 4000-byte string as SQLT_AFC, with OCI_ATTR_CHARSET_ID set to OCI_UTF16ID, resulted in "inserted value too large for column" when the statement was executed...
Versions: Linux 64-bit, Oracle 10.2 and 11.2, using Oracle Instant Client 18.104.22.168.