I use read.csv.ffdf from the ff package to load an 830MB CSV file, which has about 8,800,000 rows and 19 columns:
    library(ff)
    library(ffbase)
    green_2018_ff <- read.csv.ffdf(file = "green_2018.csv", header = TRUE)
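As an aside, read.csv.ffdf imports the file chunk by chunk, and the chunk sizes can be set explicitly through its first.rows and next.rows arguments. A minimal sketch of the same import with those knobs (the values are illustrative, not tuned for this file):

    ## Same import with explicit chunking: the first 10,000 rows are read
    ## to set up the ffdf (column classes come from this first chunk),
    ## then the rest is appended in blocks of 100,000 rows.
    green_2018_ff <- read.csv.ffdf(file = "green_2018.csv",
                                   header = TRUE,
                                   first.rows = 10000,
                                   next.rows = 100000,
                                   VERBOSE = TRUE)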
But when I check the size of green_2018_ff using object_size from the pryr package, the object is about 1.13GB in memory:

    library(pryr)
    object_size(green_2018_ff)
    # 1.13 GB

I thought an ffdf was only a memory-mapping object, so it should be very small in memory, much smaller than the original CSV. Is there anything wrong with my code or data? Thanks.
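For completeness, a minimal sketch of how one might confirm that the data really lives on disk rather than in RAM, assuming the green_2018_ff object created above. physical() and filename() are standard ff helpers; the object.size() comparison is included only because base R and pryr measure objects differently:

    library(ff)
    library(pryr)

    ## physical() returns the list of ff vectors backing the ffdf,
    ## one per column; each is stored in a memory-mapped file on disk.
    str(physical(green_2018_ff), max.level = 1)

    ## filename() reports the on-disk backing file of a single column.
    filename(physical(green_2018_ff)[[1]])

    ## Base object.size() does not follow environments, while
    ## pryr::object_size() does, so the two can report very different
    ## numbers for the same ffdf.
    object.size(green_2018_ff)
    object_size(green_2018_ff)

If filename() points at real on-disk files, the columns are disk-backed, and the question becomes what object_size is counting rather than whether the data was copied into RAM.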
Source: https://stackoverflow.com/questions/65929763/why-the-ffdf-object-is-so-large