So say you have a wrist-worn accelerometer that detects movement in 3 axes in a binary way, once per second (i.e. for each second the device is worn, it records whether there was movement in the x, y and/or z axis). Over 24 h (86,400 seconds) with 3 data points per second (a 1 or 0 for each axis, depending on whether there was movement in that axis or not), this adds up to a lot of data. And usually you want to measure not just for a day but for several days or even weeks. A sampling rate of 1 Hz (1 data point per second) is actually fairly low – in some settings you would want to assess bouts of 0.5 seconds or even less – meaning even more data. This definitely goes beyond the usual Excel-based data analysis.
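To make the data volume concrete, here is a minimal back-of-the-envelope sketch in Python (the one-week figure is just an illustrative extension of the example above):

```python
# How raw 1 Hz binary triaxial data adds up over time.
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 seconds
AXES = 3                                  # x, y, z

points_per_day = SECONDS_PER_DAY * AXES   # data points for one day of wear
points_per_week = points_per_day * 7      # a typical multi-day measurement

print(points_per_day)    # 259200
print(points_per_week)   # 1814400
```

Sampling at 2 Hz (0.5-second bouts) would simply double these numbers again.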
Therefore, in most cases the vendor providing the device will offer a software solution that integrates the data into specific pre-defined parameters, e.g. time spent moving. Usually the vendor has validated these algorithms (be sure to check!) and has set certain thresholds to differentiate movement from artifacts.
So the good news is: instead of importing raw accelerometry data into your database, you receive integrated data. In the above example you would not have 86,400 × 3 data points, but one number for the time the subject spent moving and one for the time spent standing still. This in turn means that you need to decide a priori what kind of data integration you want (e.g. time spent in movement over defined periods of time) and whether the device's software package can actually provide these integrations.
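The kind of integration described above can be sketched in a few lines of Python. This is a simplified illustration with made-up sample data, not the actual vendor algorithm (which would also apply validated thresholds to filter artifacts):

```python
# Collapsing per-second binary movement flags into "time spent moving".
# One (x, y, z) flag triple per second: 1 = movement on that axis.
# The sample data below is invented for illustration.
raw = [
    (1, 0, 0),  # second 1: movement on x only
    (0, 0, 0),  # second 2: no movement
    (0, 1, 1),  # second 3: movement on y and z
    (0, 0, 0),  # second 4: no movement
    (1, 1, 1),  # second 5: movement on all axes
]

# Count a second as "moving" if any axis registered movement.
seconds_moving = sum(1 for x, y, z in raw if x or y or z)
seconds_still = len(raw) - seconds_moving

print(seconds_moving, seconds_still)  # 3 2
```

Instead of 15 raw values, you end up with just two integrated numbers – which is exactly why you must decide beforehand which integrations you will need.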
Of course there are also devices that do not need external software to integrate the data – the device's built-in software does the job instead. But be aware that, in this case, you have no way to define how the data are integrated – you have to accept what the device does. In many cases this may be sufficient; in other settings (e.g. specific scientific questions) it might be completely unsatisfactory.