Maybe I should have checked the API before asking here. This turns out to be a simple question: if I had looked at the properties of ChannelBuffer in the API, the arrayOffset property would already have told me the answer!
Still, for the sake of completeness, and in case some of you need a clarification of my answer, here is a detailed explanation.
First, I have to admit that I still don't know how ChannelBuffer packages the image byte array, which means I still don't know why there is an arrayOffset before the array data. Why can't we just read the data directly? Is there really an important reason for the arrayOffset to exist, maybe safety, or efficiency? I don't know; I can't find an answer in the API docs. So I'm really tired of this question now, and whether you vote it down or not, let's just move on.
Back to the subject: the problem can be solved this way:

// The ChannelBuffer's backing array may begin before the actual image data;
// arrayOffset() tells us where the image bytes really start.
int offset = compressedImage.getData().arrayOffset();
byte[] receivedImageBytes = compressedImage.getData().array();
Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
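For anyone who wants to see why the offset matters without a ROS setup: I believe ChannelBuffer follows the same backing-array convention as java.nio.ByteBuffer, where array() returns the whole shared backing array and arrayOffset() says where this buffer's data actually starts inside it. This is a minimal, self-contained sketch of that convention using ByteBuffer only (the class name and the 4-byte "header" are my own invention for illustration, not anything from ROS):

```java
import java.nio.ByteBuffer;

public class ArrayOffsetDemo {

    // Build a buffer slice whose backing array starts BEFORE the slice's own data,
    // imitating a message payload preceded by some header bytes.
    static ByteBuffer makeSlice() {
        byte[] backing = new byte[16];
        for (int i = 0; i < backing.length; i++) {
            backing[i] = (byte) i;
        }
        ByteBuffer whole = ByteBuffer.wrap(backing);
        whole.position(4);     // pretend the first 4 bytes are a header
        return whole.slice();  // the slice shares the SAME backing array
    }

    public static void main(String[] args) {
        ByteBuffer slice = makeSlice();
        byte[] raw = slice.array();        // the WHOLE backing array, header included
        int offset = slice.arrayOffset();  // where the slice's data really begins

        System.out.println("offset = " + offset);          // offset = 4
        System.out.println("raw.length = " + raw.length);  // 16, not 12
        // Reading raw[0] would give you a header byte; the payload starts at raw[offset].
        System.out.println("first payload byte = " + raw[offset]);
    }
}
```

So if you decode starting at index 0 instead of arrayOffset(), you feed the decoder some leading bytes that are not part of the image, which is exactly why BitmapFactory.decodeByteArray needs the offset and the reduced length.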
OK! I still hope that someone with good knowledge of this can tell me why; I would really appreciate it! If you are just as tired of this question as I am, then let's vote to close it, and I'd appreciate that too. Thanks anyway!
[Solved] Message size of sensor_msgs/Image or CompressedImage changes when publishing/subscribing in ROS-Android?