Hey, I couldn't find an answer for this, so I thought I'd sign up! As you can guess, I've been trying out the method in this video: https://www.youtube.com/watch?v=IhuURsu0jdI. It's a series of formulas for extracting unique data, compiling it into a list, and then using that list to make a dynamically updating DV (data validation) list. It works just great. The trouble is, when it's applied to large data sets, it seriously ties up the CPU.

My question is: how can I reduce or eliminate this problem? What elements can be altered or removed to improve performance?

The formula below extracts a list of unique names in sequence. From there it's fairly simple to make the DV list:

    =IF(ROWS(B$13:B13)>$C$13,"",INDEX(User_Name_Extract,
        SMALL(IF(FREQUENCY(IF(User_Name_Extract<>"",
            MATCH(User_Name_Extract,User_Name_Extract&"",0)),
            ROW(User_Name_Extract)-ROW('Data Source'!$A$2)+1),
        ROW(User_Name_Extract)-ROW('Data Source'!$A$2)+1),
        ROWS(B$13:B13))))

$C$13 contains a unique-count formula. B$13:B13 is the position in the list (so if there were only 10 unique names you wanted to extract, you could get away with dragging it down to B$13:B23). "User_Name_Extract" is the range, which in my case is 32,000 rows. xD You can probably get a clearer understanding from the video than I can give.

Is there any solution when applying this to such large datasets? And what in particular is causing the trouble? Any help would be grand.
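For anyone curious about the cost, here is a rough sketch (in Python, not Excel, and just an illustration of the logic rather than the actual formula engine) of what the MATCH-based unique extraction is doing. MATCH(User_Name_Extract,User_Name_Extract&"",0) effectively does a linear scan of the whole range for every cell in the range, so the work grows roughly with the square of the row count; a hash-based "seen" set does the same job in a single pass:

```python
def unique_in_order_quadratic(values):
    """Mimics the MATCH/FREQUENCY approach: for each cell, find the
    position of its first occurrence by scanning from the top.
    values.index(v) is the analogue of MATCH(v, range, 0) and costs
    O(n) per call, so the whole pass is O(n^2)."""
    uniques = []
    for i, v in enumerate(values):
        if v == "":
            continue
        if values.index(v) == i:  # this cell is the first occurrence
            uniques.append(v)
    return uniques


def unique_in_order_linear(values):
    """Same result using a set of seen values: one pass, O(n)."""
    seen = set()
    uniques = []
    for v in values:
        if v != "" and v not in seen:
            seen.add(v)
            uniques.append(v)
    return uniques
```

On 32,000 rows the quadratic version performs on the order of a billion comparisons, which is roughly what the array formula forces Excel to do on every recalculation; that is one plausible explanation for the CPU tie-up.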
This question generated 17 answers.
This thread is current as of October 31, 2014.