I compiled the Java driver from source myself; the version is 2.7.2-141.
With the released 2.7.2 and 2.7.3 drivers it works well, because there skip() is just Java's default InputStream implementation.
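(For context, the skip() that the released drivers inherit from java.io.InputStream is implemented on top of read(), roughly like the paraphrase below, so it never touches the GridFS chunk bookkeeping directly; this is only a sketch from memory, not the exact JDK source, and the buffer size is illustrative.)

    // rough paraphrase of java.io.InputStream.skip(long):
    // it skips by reading into a throwaway buffer, so read() has already
    // advanced the stream's internal position before any offset arithmetic
    public long skip(long n) throws IOException {
        long remaining = n;
        if (n <= 0)
            return 0;
        byte[] skipBuffer = new byte[(int) Math.min(2048, remaining)];
        while (remaining > 0) {
            int nr = read(skipBuffer, 0, (int) Math.min(skipBuffer.length, remaining));
            if (nr < 0)
                break;
            remaining -= nr;
        }
        return n - remaining;
    }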
In my code:
try {
    Mongo mongo = new Mongo("localhost");
    DB db = mongo.getDB("test");
    GridFS gridFS = new GridFS(db, "file");
    File file = new File("d:/logs.txt"); // the file's size is about 200 bytes
    GridFSInputFile input = gridFS.createFile(file);
    input.save();
    GridFSDBFile read = gridFS.find(new ObjectId(input.getId().toString()));
    try {
        // ... read from the stored file here (code omitted in this report) ...
    } catch (IOException e) {
        System.out.println(e.toString());
        // here catch the exception: com.mongodb.MongoException: can't find a chunk! file id.....
    }
} catch (Exception e) {
    System.out.println(e.toString());
}
When I run this sample code, an exception is thrown with the error message:
com.mongodb.MongoException: can't find a chunk! file id.....
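For reference, the empty inner try above is where my code reads from the stored file; in my understanding the failing branch in this build is hit as soon as skip() is called on the stream before any read(). A minimal illustration (the skip distance of 10 bytes is arbitrary):

    InputStream in = read.getInputStream();
    in.skip(10); // skip() runs before any read(), so the stream's _currentChunkIdx is still -1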
Then I debugged into the skip() function:
public long skip(long numBytesToSkip) throws IOException {
    if (numBytesToSkip <= 0)
        return 0;

    if (_currentChunkIdx == _numChunks)
        //We're actually skipping over the back end of the file, short-circuit here
        //Don't count those extra bytes to skip in with the return value
        return 0;

    if (_offset + numBytesToSkip <= _chunkSize) {
        //We're skipping over bytes in the current chunk, adjust the offset accordingly
        _offset += numBytesToSkip;
        if (_data == null && _currentChunkIdx < _numChunks)
            // if the _currentChunkIdx is -1, getChunk will throw exception
            _data = getChunk(_currentChunkIdx);
        return numBytesToSkip;
    }

    //We skipping over the remainder of this chunk, could do this less recursively...
    ++_currentChunkIdx;
    long skippedBytes = 0;
    if (_currentChunkIdx < _numChunks)
        skippedBytes = _chunkSize - _offset;
    else
        skippedBytes = _lastChunkSize;
    _offset = 0;
    _data = null;

    return skippedBytes + skip(numBytesToSkip - skippedBytes);
}
At the line "_data = getChunk(_currentChunkIdx)", the exception is thrown because _currentChunkIdx is still -1.
So this looks like a bug.
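A possible fix, sketched against the field names shown above (my suggestion only, not a tested patch; it assumes a later read() will load the proper chunk using the adjusted _offset):

    if (_data == null && _currentChunkIdx >= 0 && _currentChunkIdx < _numChunks)
        // only prefetch when a chunk is actually current; never call getChunk(-1)
        _data = getChunk(_currentChunkIdx);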
This is related to JAVA-332: GridFS: Allow seek/reads (get part of a chunk) on stream interface (Closed).