I have a table with many fields. According to this page, the error is caused by trying to insert, into one of the fields, data that exceeds the maximum length defined for that field in the table. The recommended fix is to identify the field (or fields) and correct the allowed size in the table. But since the table in question is very large, I have spent more than 12 hours trying to find the guilty field.
The method I am using is the following:
I ran a SELECT in which I applied, field by field, a CAST matching the type defined in the table, and executed the query for each field until I got the same error that the INSERT over all the fields produces.
Is there an effective way to identify the responsible field?
If we are looking for an approach flexible enough to cover every way a recordset can be produced (from one table or several, with combined columns, dynamic queries, stored procedures, functions, or through linked servers), then the answer is no, there isn't one.
I can, however, propose some possible approaches:
Trial and error
This is the most flexible and most tedious method. Suppose you have a query that returns 10 columns and triggers an error of this type. The procedure is "divide and conquer": repeat the query, but insert only the first 5 columns. If you still get the error, repeat the procedure, halving the columns again; if you don't, try the other 5 columns the same way, and so on until the problem column is isolated.
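As a sketch of the idea, using made-up table and column names, the first bisection step might look like this:

```sql
-- Hypothetical scenario: inserting all 10 columns fails with
-- "String or binary data would be truncated".
-- Step 1: try only the first half of the columns.
INSERT INTO destination (col1, col2, col3, col4, col5)
SELECT col1, col2, col3, col4, col5
FROM source;

-- If this fails, the culprit is among col1..col5: repeat with col1..col3.
-- If it succeeds, test the other half (col6..col10) instead,
-- and keep halving until a single column reproduces the error.
```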
SELECT * INTO
Some ways of obtaining a recordset allow you to do a
SELECT * INTO <temp table>
, which basically creates the table dynamically. For example, suppose we have a query that is guaranteed to raise this error. Obviously it could have many more columns; the idea, in any case, is to insert the result of the query into an automatically created table, and then check the lengths. We take the original query and insert it directly into a session temporary table:
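As an illustration (the table and column names here are made up), the failing query could be materialized into a session temporary table like this:

```sql
-- Hypothetical failing query: selecting INTO #tmp instead of inserting
-- into the real destination table lets SQL Server create the columns
-- with widths derived from the actual data.
SELECT c.customer_name,
       c.address,
       o.notes
INTO   #tmp            -- session temporary table, created automatically
FROM   customers c
JOIN   orders o ON o.customer_id = c.customer_id;
```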
And now we simply check the maximum lengths of each column:
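Checking the maximum lengths amounts to one MAX(LEN(...)) per column; continuing with the hypothetical #tmp above:

```sql
-- Longest value actually present in each character column of #tmp
SELECT MAX(LEN(customer_name)) AS max_customer_name,
       MAX(LEN(address))       AS max_address,
       MAX(LEN(notes))         AS max_notes
FROM   #tmp;
```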
And obviously we will have to compare these maximum values with those of the final table where we want to insert the data.
You could do something like this:
I used the following table for this example:
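For instance, a small destination table with deliberately narrow varchar columns could serve as a stand-in (the names here are hypothetical):

```sql
-- Hypothetical destination table with short varchar columns
CREATE TABLE destination_table_name (
    id            INT         NOT NULL,
    customer_name VARCHAR(20) NOT NULL,
    address       VARCHAR(30) NULL,
    notes         VARCHAR(50) NULL
);
```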
This query shows, for each column of the target table, the name, the data type, the maximum length, the precision, the scale, whether it can be null, etc. (you can add other information that I omitted in this case). destination_table_name would be the name of your destination table (the one where you are going to insert the data). It will help you compare later against the information you get from the source table.
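A sketch of such a metadata query, using INFORMATION_SCHEMA.COLUMNS (replace 'destination_table_name' with your actual table name):

```sql
-- Column definitions of the destination table: name, type, max length,
-- precision, scale, nullability
SELECT COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       NUMERIC_SCALE,
       IS_NULLABLE
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'destination_table_name'
ORDER  BY ORDINAL_POSITION;
```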
Output:
Then, from the previous query, you can automatically generate queries against the source table.
For example, with this query you build another query (or several) that you will execute afterwards. Here each varchar column of the table corresponds to one row, along with the length of the longest string in that column.
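One way to write that generator (a sketch; 'source_table' is a placeholder) is to emit one SELECT MAX(LEN(...)) line per character column and chain them with UNION:

```sql
-- Emits one row of SQL text per varchar/char column of the source table,
-- each computing the longest string stored in that column
SELECT 'SELECT ''' + COLUMN_NAME + ''' AS column_name, '
     + 'MAX(LEN(' + QUOTENAME(COLUMN_NAME) + ')) AS max_length '
     + 'FROM source_table UNION'
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'source_table'
  AND  DATA_TYPE IN ('varchar', 'nvarchar', 'char', 'nchar');
```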
The output of this query is:
So you copy all those rows as a single query, replace the last UNION with a semicolon, and run it:
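Assembled, the pasted query would look something like this (column names again hypothetical), with the trailing UNION of the last row replaced by a semicolon:

```sql
SELECT 'customer_name' AS column_name, MAX(LEN(customer_name)) AS max_length FROM source_table UNION
SELECT 'address', MAX(LEN(address)) FROM source_table UNION
SELECT 'notes', MAX(LEN(notes)) FROM source_table;
```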
The result would be this:
That is, for each varchar field of the source table it returns the length of the longest string it contains.
So you can generate different combinations according to what you need.