Feature/globebrowsing optimization (#310)

* Simplest possible PBO implementation.

* Add PBO class

* TileLoadJob owns raw tile data

* Working on a solution to cache textures and reuse them

* PBO and cached textures working for one texture type. Color textures.

* Threadpool for tile requests uses LRU cache as queue

* Remove framesUntilRequestFlush

* Clean up

* Clean up

* Use prioritizing concurrent job manager

* Use TileTextureInitData to initialize RawTileDataReader.

* Class TextureContainer owns the textures to use for tiles.

* Using TileTextureInitData to determine if new caches need to be created.

* Remove WriteDataDescription

* Remove TileDataLayout

* Rendering many different layer types again

* TileProviderByLevel gives layergroup id to tile providers

* Comment away use of PBO

* Erase unfinished requests to make room for new ones

* Enable choice of PBO or not.

* Enable resetting of asynctiledataprovider

* Add the ability to use PBO and also load to CPU

* Update ghoul

* Solve culling issue.

* Texture pointer of Tile is now a raw pointer. Currently breaks single image tile provider and text tile provider.

* Add gpudata

* Move fetching of shader preprocessing data to LayerManager

* No comparisons to determine shader recompilation.

* Show the tile cache size in the GUI

* Clean up and comment.

* Solve bug where float is interpreted as NaN

* Enable ability to blend between layers again

* Fix single image provider

* Fix windows build error

* Fix OSX compile issue.

* Some clean up

* Showing correct texture data size

* Enable use of text tile providers again. No background image path, however.

* Change cache size from GUI

* Clean up

* Solve OSX compilation error.

* Update ghoul

* Make it possible to switch between PBO and not during runtime.

* Enable resetting of tile datasets

* Change function module in moduleengine to identify module by name

* MemoryAwareTileCache is no longer a singleton

* Update ownership of properties for globe browsing

* Logging info about resetting tile reader.

* Logging info

* Fix requested changes

* Fix some compile warnings.

* Fix compilation warnings

* Add ability to blend values with blend parameter. Also define settings through lua dict.

* Fix some comments on pull request.

* Change formatting

* Change formatting

* Change formatting

* Fix pull request comments.

* Those are details

* Make Mercury great again.

* Make Earth great again.

* Solve conflict

* Test to sometimes use valueblending and sometimes not

* Not always use value blending

* Update ghoul

* Change from auto to explicit type.

* Update test for LRU Cache

* Include algorithm.
This commit is contained in:
Kalle Bladin
2017-05-30 15:37:05 +02:00
committed by GitHub
parent f6da2b6472
commit f51f293989
128 changed files with 3039 additions and 1930 deletions
+27 -4
@@ -36,26 +36,49 @@ namespace cache {
* Templated class implementing a Least-Recently-Used Cache.
* <code>KeyType</code> needs to be an enumerable type.
*/
template<typename KeyType, typename ValueType>
template<typename KeyType, typename ValueType, typename HasherType>
class LRUCache {
public:
using Item = std::pair<KeyType, ValueType>;
using Items = std::list<Item>;
/**
* \param size is the maximum size of the cache given in number of cached items.
*/
LRUCache(size_t size);
void put(const KeyType& key, const ValueType& value);
std::vector<Item> putAndFetchPopped(const KeyType& key, const ValueType& value);
void clear();
bool exist(const KeyType& key) const;
/**
* If value exists, the value is bumped to the front of the queue.
* \returns true if value of this key exists.
*/
bool touch(const KeyType& key);
bool isEmpty() const;
ValueType get(const KeyType& key);
/**
* Pops the front of the queue.
*/
Item popMRU();
/**
* Pops the back of the queue.
*/
Item popLRU();
size_t size() const;
private:
void putWithoutCleaning(const KeyType& key, const ValueType& value);
void clean();
std::vector<Item> cleanAndFetchPopped();
std::list<std::pair<KeyType, ValueType>> _itemList;
std::unordered_map<KeyType, decltype(_itemList.begin())> _itemMap;
size_t _cacheSize;
Items _itemList;
std::unordered_map<KeyType, typename Items::const_iterator, HasherType> _itemMap;
size_t _maximumCacheSize;
};
} // namespace cache
+106 -27
@@ -28,36 +28,63 @@ namespace openspace {
namespace globebrowsing {
namespace cache {
template<typename KeyType, typename ValueType>
LRUCache<KeyType, ValueType>::LRUCache(size_t size)
: _cacheSize(size)
template<typename KeyType, typename ValueType, typename HasherType>
LRUCache<KeyType, ValueType, HasherType>::LRUCache(size_t size)
: _maximumCacheSize(size)
{}
template<typename KeyType, typename ValueType>
void LRUCache<KeyType, ValueType>::clear() {
_itemList.erase(_itemList.begin(), _itemList.end());
_itemMap.erase(_itemMap.begin(), _itemMap.end());
template<typename KeyType, typename ValueType, typename HasherType>
void LRUCache<KeyType, ValueType, HasherType>::clear() {
_itemList.clear();
_itemMap.clear();
}
template<typename KeyType, typename ValueType>
void LRUCache<KeyType, ValueType>::put(const KeyType& key, const ValueType& value) {
auto it = _itemMap.find(key);
if (it != _itemMap.end()) {
_itemList.erase(it->second);
_itemMap.erase(it);
}
_itemList.push_front(std::make_pair(key, value));
_itemMap.insert(std::make_pair(key, _itemList.begin()));
template<typename KeyType, typename ValueType, typename HasherType>
void LRUCache<KeyType, ValueType, HasherType>::put(const KeyType& key,
const ValueType& value)
{
putWithoutCleaning(key, value);
clean();
}
template<typename KeyType, typename ValueType>
bool LRUCache<KeyType, ValueType>::exist(const KeyType& key) const {
template<typename KeyType, typename ValueType, typename HasherType>
std::vector<std::pair<KeyType, ValueType>>
LRUCache<KeyType, ValueType, HasherType>::putAndFetchPopped(const KeyType& key,
const ValueType& value)
{
putWithoutCleaning(key, value);
return cleanAndFetchPopped();
}
template<typename KeyType, typename ValueType, typename HasherType>
bool LRUCache<KeyType, ValueType, HasherType>::exist(const KeyType& key) const {
return _itemMap.count(key) > 0;
}
template<typename KeyType, typename ValueType>
ValueType LRUCache<KeyType, ValueType>::get(const KeyType& key) {
template<typename KeyType, typename ValueType, typename HasherType>
bool LRUCache<KeyType, ValueType, HasherType>::touch(const KeyType& key) {
auto it = _itemMap.find(key);
if (it != _itemMap.end()) { // Found in cache
ValueType value = it->second->second;
// Remove from current position
_itemList.erase(it->second);
_itemMap.erase(it);
// Bump to front
_itemList.emplace_front(key, value);
_itemMap.emplace(key, _itemList.begin());
return true;
} else {
return false;
}
}
template<typename KeyType, typename ValueType, typename HasherType>
bool LRUCache<KeyType, ValueType, HasherType>::isEmpty() const {
return _itemMap.size() == 0;
}
template<typename KeyType, typename ValueType, typename HasherType>
ValueType LRUCache<KeyType, ValueType, HasherType>::get(const KeyType& key) {
//ghoul_assert(exist(key), "Key " << key << " must exist");
auto it = _itemMap.find(key);
// Move list iterator pointing to value
@@ -65,20 +92,72 @@ ValueType LRUCache<KeyType, ValueType>::get(const KeyType& key) {
return it->second->second;
}
template<typename KeyType, typename ValueType>
size_t LRUCache<KeyType, ValueType>::size() const {
template<typename KeyType, typename ValueType, typename HasherType>
std::pair<KeyType, ValueType> LRUCache<KeyType, ValueType, HasherType>::popMRU() {
ghoul_assert(_itemList.size() > 0,
"Can not pop from LRU cache. Ensure cache is not empty.");
auto first_it = _itemList.begin();
_itemMap.erase(first_it->first);
std::pair<KeyType, ValueType> toReturn = _itemList.front();
_itemList.pop_front();
return toReturn;
}
template<typename KeyType, typename ValueType, typename HasherType>
std::pair<KeyType, ValueType> LRUCache<KeyType, ValueType, HasherType>::popLRU() {
ghoul_assert(_itemList.size() > 0,
"Can not pop from LRU cache. Ensure cache is not empty.");
auto lastIt = _itemList.end();
lastIt--;
_itemMap.erase(lastIt->first);
std::pair<KeyType, ValueType> toReturn = _itemList.back();
_itemList.pop_back();
return toReturn;
}
template<typename KeyType, typename ValueType, typename HasherType>
size_t LRUCache<KeyType, ValueType, HasherType>::size() const {
return _itemMap.size();
}
template<typename KeyType, typename ValueType>
void LRUCache<KeyType, ValueType>::clean() {
while (_itemMap.size() > _cacheSize) {
auto last_it = _itemList.end(); last_it--;
_itemMap.erase(last_it->first);
template<typename KeyType, typename ValueType, typename HasherType>
void LRUCache<KeyType, ValueType, HasherType>::putWithoutCleaning(const KeyType& key,
const ValueType& value)
{
auto it = _itemMap.find(key);
if (it != _itemMap.end()) {
_itemList.erase(it->second);
_itemMap.erase(it);
}
_itemList.emplace_front(key, value);
_itemMap.emplace(key, _itemList.begin());
}
template<typename KeyType, typename ValueType, typename HasherType>
void LRUCache<KeyType, ValueType, HasherType>::clean() {
while (_itemMap.size() > _maximumCacheSize) {
auto lastIt = _itemList.end();
lastIt--;
_itemMap.erase(lastIt->first);
_itemList.pop_back();
}
}
template<typename KeyType, typename ValueType, typename HasherType>
std::vector<std::pair<KeyType, ValueType>>
LRUCache<KeyType, ValueType, HasherType>::cleanAndFetchPopped()
{
std::vector<std::pair<KeyType, ValueType>> toReturn;
while (_itemMap.size() > _maximumCacheSize) {
auto lastIt = _itemList.end();
lastIt--;
_itemMap.erase(lastIt->first);
toReturn.push_back(_itemList.back());
_itemList.pop_back();
}
return toReturn;
}
} // namespace cache
} // namespace globebrowsing
} // namespace openspace
-100
@@ -1,100 +0,0 @@
/*****************************************************************************************
* *
* OpenSpace *
* *
* Copyright (c) 2014-2017 *
* *
* Permission is hereby granted, free of charge, to any person obtaining a copy of this *
* software and associated documentation files (the "Software"), to deal in the Software *
* without restriction, including without limitation the rights to use, copy, modify, *
* merge, publish, distribute, sublicense, and/or sell copies of the Software, and to *
* permit persons to whom the Software is furnished to do so, subject to the following *
* conditions: *
* *
* The above copyright notice and this permission notice shall be included in all copies *
* or substantial portions of the Software. *
* *
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, *
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A *
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT *
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF *
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE *
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *
****************************************************************************************/
#include <ghoul/misc/assert.h>
namespace openspace {
namespace globebrowsing {
namespace cache {
template<typename KeyType, typename ValueType, typename HasherType>
MemoryAwareLRUCache<KeyType, ValueType, HasherType>::MemoryAwareLRUCache(size_t maximumSize)
: _maximumCacheSize(maximumSize)
, _cacheSize(0)
{}
template<typename KeyType, typename ValueType, typename HasherType>
void MemoryAwareLRUCache<KeyType, ValueType, HasherType>::clear() {
_itemList.clear();
_itemMap.clear();
_cacheSize = 0;
}
template<typename KeyType, typename ValueType, typename HasherType>
void MemoryAwareLRUCache<KeyType, ValueType, HasherType>::put(const KeyType& key, const ValueType& value) {
auto it = _itemMap.find(key);
if (it != _itemMap.end()) {
_cacheSize -= it->second->second.memoryImpact();
_itemList.erase(it->second);
_itemMap.erase(it);
}
_itemList.emplace_front(key, value);
_itemMap.emplace(key, _itemList.begin());
_cacheSize += _itemList.begin()->second.memoryImpact();
clean();
}
template<typename KeyType, typename ValueType, typename HasherType>
bool MemoryAwareLRUCache<KeyType, ValueType, HasherType>::exist(const KeyType& key) const {
return _itemMap.count(key) > 0;
}
template<typename KeyType, typename ValueType, typename HasherType>
ValueType MemoryAwareLRUCache<KeyType, ValueType, HasherType>::get(const KeyType& key) {
//ghoul_assert(exist(key), "Key " << key << " must exist");
auto it = _itemMap.at(key);
// Move list iterator pointing to value
_itemList.splice(_itemList.begin(), _itemList, it);
return it->second;
}
template<typename KeyType, typename ValueType, typename HasherType>
size_t MemoryAwareLRUCache<KeyType, ValueType, HasherType>::size() const {
return _cacheSize;
}
template<typename KeyType, typename ValueType, typename HasherType>
size_t MemoryAwareLRUCache<KeyType, ValueType, HasherType>::maximumSize() const {
return _maximumCacheSize;
}
template<typename KeyType, typename ValueType, typename HasherType>
void MemoryAwareLRUCache<KeyType, ValueType, HasherType>::setMaximumSize(size_t maximumSize) {
_maximumCacheSize = maximumSize;
}
template<typename KeyType, typename ValueType, typename HasherType>
void MemoryAwareLRUCache<KeyType, ValueType, HasherType>::clean() {
while (_cacheSize > _maximumCacheSize) {
auto last_it = _itemList.end();
last_it--;
_itemMap.erase(last_it->first);
_cacheSize -= last_it->second.memoryImpact();
_itemList.pop_back();
}
}
} // namespace cache
} // namespace globebrowsing
} // namespace openspace
+256 -28
@@ -24,59 +24,287 @@
#include <modules/globebrowsing/cache/memoryawaretilecache.h>
#include <modules/globebrowsing/rendering/layer/layergroupid.h>
#include <modules/globebrowsing/rendering/layer/layermanager.h>
#include <ghoul/ghoul.h>
#include <ghoul/logging/consolelog.h>
#include <ghoul/misc/invariants.h>
#include <ghoul/systemcapabilities/generalcapabilitiescomponent.h>
#include <numeric>
#include <algorithm>
namespace openspace {
namespace globebrowsing {
namespace cache {
MemoryAwareTileCache* MemoryAwareTileCache::_singleton = nullptr;
std::mutex MemoryAwareTileCache::_mutexLock;
void MemoryAwareTileCache::create(size_t cacheSize) {
std::lock_guard<std::mutex> guard(_mutexLock);
_singleton = new MemoryAwareTileCache(cacheSize);
namespace {
const char* _loggerCat = "MemoryAwareTileCache";
}
void MemoryAwareTileCache::destroy() {
std::lock_guard<std::mutex> guard(_mutexLock);
delete _singleton;
MemoryAwareTileCache::MemoryAwareTileCache()
: PropertyOwner("TileCache")
, _numTextureBytesAllocatedOnCPU(0)
, _cpuAllocatedTileData(
"cpuAllocatedTileData", "CPU allocated tile data (MB)",
1024, // Default
128, // Minimum
2048, // Maximum
1) // Step: One MB
, _gpuAllocatedTileData(
"gpuAllocatedTileData", "GPU allocated tile data (MB)",
1024, // Default
128, // Minimum
2048, // Maximum
1) // Step: One MB
, _tileCacheSize(
"tileCacheSize", "Tile cache size",
1024, // Default
128, // Minimum
2048, // Maximum
1) // Step: One MB
, _applyTileCacheSize("applyTileCacheSize", "Apply tile cache size")
, _clearTileCache("clearTileCache", "Clear tile cache")
, _usePbo("usePbo", "Use PBO", false)
{
createDefaultTextureContainers();
// Properties
_clearTileCache.onChange(
[&]{
clear();
});
_applyTileCacheSize.onChange(
[&]{
setSizeEstimated(_tileCacheSize * 1024 * 1024);
});
_cpuAllocatedTileData.setMaxValue(
CpuCap.installedMainMemory() * 0.25);
_gpuAllocatedTileData.setMaxValue(
CpuCap.installedMainMemory() * 0.25);
_tileCacheSize.setMaxValue(
CpuCap.installedMainMemory() * 0.25);
setSizeEstimated(_tileCacheSize * 1024 * 1024);
_cpuAllocatedTileData.setReadOnly(true);
_gpuAllocatedTileData.setReadOnly(true);
addProperty(_clearTileCache);
addProperty(_applyTileCacheSize);
addProperty(_cpuAllocatedTileData);
addProperty(_gpuAllocatedTileData);
addProperty(_tileCacheSize);
addProperty(_usePbo);
}
MemoryAwareTileCache& MemoryAwareTileCache::ref() {
std::lock_guard<std::mutex> guard(_mutexLock);
ghoul_assert(_singleton, "MemoryAwareTileCache not created");
return *_singleton;
}
MemoryAwareTileCache::~MemoryAwareTileCache()
{ }
void MemoryAwareTileCache::clear() {
std::lock_guard<std::mutex> guard(_mutexLock);
_tileCache.clear();
LINFO("Clearing tile cache");
_numTextureBytesAllocatedOnCPU = 0;
for (std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p : _textureContainerMap)
{
p.second.first->reset();
p.second.second->clear();
}
LINFO("Tile cache cleared");
}
void MemoryAwareTileCache::createDefaultTextureContainers() {
for (int id = 0; id < layergroupid::NUM_LAYER_GROUPS; id++) {
TileTextureInitData initData =
LayerManager::getTileTextureInitData(layergroupid::ID(id));
assureTextureContainerExists(initData);
}
}
void MemoryAwareTileCache::assureTextureContainerExists(
const TileTextureInitData& initData)
{
TileTextureInitData::HashKey initDataKey = initData.hashKey();
if (_textureContainerMap.find(initDataKey) == _textureContainerMap.end()) {
// For now create 500 textures of this type
_textureContainerMap.emplace(initDataKey,
TextureContainerTileCache(
std::make_unique<TextureContainer>(initData, 500),
std::make_unique<TileCache>(std::numeric_limits<std::size_t>::max())
)
);
}
}
void MemoryAwareTileCache::setSizeEstimated(size_t estimatedSize) {
LINFO("Resetting tile cache size");
ghoul_assert(_textureContainerMap.size() > 0, "Texture containers must exist.");
size_t sumTextureTypeSize = std::accumulate(
_textureContainerMap.cbegin(),
_textureContainerMap.cend(), 0,
[](size_t s, const std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p)
{
return s + p.second.first->tileTextureInitData().totalNumBytes();
}
);
size_t numTexturesPerType = estimatedSize / sumTextureTypeSize;
resetTextureContainerSize(numTexturesPerType);
LINFO("Tile cache size was reset");
}
void MemoryAwareTileCache::resetTextureContainerSize(size_t numTexturesPerTextureType) {
_numTextureBytesAllocatedOnCPU = 0;
for (std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p : _textureContainerMap)
{
p.second.first->reset(numTexturesPerTextureType);
p.second.second->clear();
}
}
bool MemoryAwareTileCache::exist(ProviderTileKey key) const {
std::lock_guard<std::mutex> guard(_mutexLock);
return _tileCache.exist(key);
TextureContainerMap::const_iterator result =
std::find_if(_textureContainerMap.cbegin(), _textureContainerMap.cend(),
[&](const std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p){
return p.second.second->exist(key);
});
return result != _textureContainerMap.cend();
}
Tile MemoryAwareTileCache::get(ProviderTileKey key) {
std::lock_guard<std::mutex> guard(_mutexLock);
return _tileCache.get(key);
TextureContainerMap::const_iterator it =
std::find_if(_textureContainerMap.cbegin(), _textureContainerMap.cend(),
[&](const std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p){
return p.second.second->exist(key);
});
if (it != _textureContainerMap.cend()) {
return it->second.second->get(key);
}
else {
return Tile::TileUnavailable;
}
}
void MemoryAwareTileCache::put(ProviderTileKey key, Tile tile) {
std::lock_guard<std::mutex> guard(_mutexLock);
_tileCache.put(key, tile);
ghoul::opengl::Texture* MemoryAwareTileCache::getTexture(
const TileTextureInitData& initData)
{
ghoul::opengl::Texture* texture;
// if this texture type does not exist among the texture containers
// it needs to be created
TileTextureInitData::HashKey initDataKey = initData.hashKey();
assureTextureContainerExists(initData);
// Now we know that the texture container exists,
// check if there are any unused textures
texture = _textureContainerMap[initDataKey].first->getTextureIfFree();
// Second option. No more textures available. Pop from the LRU cache
if (!texture) {
Tile oldTile = _textureContainerMap[initDataKey].second->popLRU().second;
// Use the old tile's texture
texture = oldTile.texture();
}
return texture;
}
void MemoryAwareTileCache::setMaximumSize(size_t maximumSize) {
std::lock_guard<std::mutex> guard(_mutexLock);
_tileCache.setMaximumSize(maximumSize);
void MemoryAwareTileCache::createTileAndPut(ProviderTileKey key,
std::shared_ptr<RawTile> rawTile)
{
ghoul_precondition(rawTile, "RawTile can not be null");
using ghoul::opengl::Texture;
if (rawTile->error != RawTile::ReadError::None) {
return;
}
else {
const TileTextureInitData& initData = *rawTile->textureInitData;
Texture* texture = getTexture(initData);
// Re-upload texture, either using PBO or by using RAM data
if (rawTile->pbo != 0) {
texture->reUploadTextureFromPBO(rawTile->pbo);
if (initData.shouldAllocateDataOnCPU()) {
if (!texture->dataOwnership()) {
_numTextureBytesAllocatedOnCPU += initData.totalNumBytes();
}
texture->setPixelData(rawTile->imageData,
Texture::TakeOwnership::Yes);
}
}
else {
size_t previousExpectedDataSize = texture->expectedPixelDataSize();
ghoul_assert(texture->dataOwnership(),
"Texture must have ownership of old data to avoid leaks");
texture->setPixelData(rawTile->imageData, Texture::TakeOwnership::Yes);
size_t expectedDataSize = texture->expectedPixelDataSize();
size_t numBytes = rawTile->textureInitData->totalNumBytes();
ghoul_assert(expectedDataSize == numBytes, "Pixel data size is incorrect");
_numTextureBytesAllocatedOnCPU += numBytes - previousExpectedDataSize;
texture->reUploadTexture();
}
texture->setFilter(ghoul::opengl::Texture::FilterMode::AnisotropicMipMap);
Tile tile(texture, rawTile->tileMetaData, Tile::Status::OK);
TileTextureInitData::HashKey initDataKey = initData.hashKey();
_textureContainerMap[initDataKey].second->put(key, tile);
}
return;
}
MemoryAwareTileCache::MemoryAwareTileCache(size_t cacheSize)
: _tileCache(cacheSize) {}
void MemoryAwareTileCache::put(const ProviderTileKey& key,
const TileTextureInitData::HashKey& initDataKey, Tile tile)
{
_textureContainerMap[initDataKey].second->put(key, tile);
return;
}
void MemoryAwareTileCache::update() {
size_t dataSizeCPU = getCPUAllocatedDataSize();
size_t dataSizeGPU = getGPUAllocatedDataSize();
_cpuAllocatedTileData.setValue(dataSizeCPU / 1024 / 1024);
_gpuAllocatedTileData.setValue(dataSizeGPU / 1024 / 1024);
}
size_t MemoryAwareTileCache::getGPUAllocatedDataSize() const {
return std::accumulate(
_textureContainerMap.cbegin(),
_textureContainerMap.cend(), 0,
[](size_t s, const std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p)
{
const TextureContainer& textureContainer = *p.second.first;
size_t bytesPerTexture =
textureContainer.tileTextureInitData().totalNumBytes();
return s + bytesPerTexture * textureContainer.size();
}
);
}
size_t MemoryAwareTileCache::getCPUAllocatedDataSize() const {
size_t dataSize = std::accumulate(
_textureContainerMap.cbegin(),
_textureContainerMap.cend(), 0,
[](size_t s, const std::pair<const TileTextureInitData::HashKey,
TextureContainerTileCache>& p)
{
const TextureContainer& textureContainer = *p.second.first;
const TileTextureInitData& initData = textureContainer.tileTextureInitData();
if (initData.shouldAllocateDataOnCPU()) {
size_t bytesPerTexture = initData.totalNumBytes();
return s + bytesPerTexture * textureContainer.size();
}
return s;
}
);
return dataSize + _numTextureBytesAllocatedOnCPU;
}
bool MemoryAwareTileCache::shouldUsePbo() const {
return _usePbo;
}
} // namespace cache
} // namespace globebrowsing
+49 -24
@@ -25,12 +25,21 @@
#ifndef __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_TILE_CACHE___H__
#define __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_TILE_CACHE___H__
#include <modules/globebrowsing/cache/lrucache.h>
#include <modules/globebrowsing/cache/texturecontainer.h>
#include <modules/globebrowsing/tile/tile.h>
#include <modules/globebrowsing/tile/tileindex.h>
#include <modules/globebrowsing/cache/memoryawarelrucache.h>
#include <modules/globebrowsing/tile/rawtile.h>
#include <modules/globebrowsing/tile/rawtiledatareader/iodescription.h>
#include <openspace/properties/propertyowner.h>
#include <openspace/properties/scalar/boolproperty.h>
#include <openspace/properties/scalarproperty.h>
#include <openspace/properties/triggerproperty.h>
#include <memory>
#include <mutex>
#include <vector>
#include <unordered_map>
namespace openspace {
namespace globebrowsing {
@@ -63,47 +72,63 @@ struct ProviderTileHasher {
unsigned long long operator()(const ProviderTileKey& t) const {
unsigned long long key = 0;
key |= static_cast<unsigned long long>(t.tileIndex.level);
key |= static_cast<unsigned long long>(t.tileIndex.x << 5);
key |= static_cast<unsigned long long>(t.tileIndex.y << 35);
key |= static_cast<unsigned long long>(t.tileIndex.x) << 5ULL;
key |= static_cast<unsigned long long>(t.tileIndex.y) << 35ULL;
// Now the key is unique for all tiles, however not for all tile providers.
// Add to the key depending on the tile provider to avoid some hash collisions.
// (All hash collisions can not be avoided due to the limit in 64 bit for the
// hash key)
// Idea: make some offset in the place of the bits for the x value. Lesser chance
// of having different x-value than having different tile provider ids.
key += static_cast<unsigned long long>(t.providerID << 25);
key += static_cast<unsigned long long>(t.providerID) << 25ULL;
return key;
}
};
/**
* Singleton class used to cache tiles for all <code>CachingTileProvider</code>s.
*/
class MemoryAwareTileCache {
class MemoryAwareTileCache : public properties::PropertyOwner {
public:
static void create(size_t cacheSize);
static void destroy();
MemoryAwareTileCache();
~MemoryAwareTileCache();
void clear();
void setSizeEstimated(size_t estimatedSize);
bool exist(ProviderTileKey key) const;
Tile get(ProviderTileKey key);
void put(ProviderTileKey key, Tile tile);
void setMaximumSize(size_t maximumSize);
ghoul::opengl::Texture* getTexture(const TileTextureInitData& initData);
void createTileAndPut(ProviderTileKey key, std::shared_ptr<RawTile> rawTile);
void put(const ProviderTileKey& key,
const TileTextureInitData::HashKey& initDataKey, Tile tile);
void update();
static MemoryAwareTileCache& ref();
size_t getGPUAllocatedDataSize() const;
size_t getCPUAllocatedDataSize() const;
bool shouldUsePbo() const;
private:
/**
* \param cacheSize is the cache size given in bytes.
*/
MemoryAwareTileCache(size_t cacheSize);
~MemoryAwareTileCache() = default;
void createDefaultTextureContainers();
void assureTextureContainerExists(const TileTextureInitData& initData);
void resetTextureContainerSize(size_t numTexturesPerTextureType);
static MemoryAwareTileCache* _singleton;
MemoryAwareLRUCache<ProviderTileKey, Tile, ProviderTileHasher> _tileCache;
static std::mutex _mutexLock;
using TileCache = LRUCache<ProviderTileKey, Tile, ProviderTileHasher>;
using TextureContainerTileCache =
std::pair<std::unique_ptr<TextureContainer>, std::unique_ptr<TileCache>>;
using TextureContainerMap = std::unordered_map<TileTextureInitData::HashKey,
TextureContainerTileCache>;
TextureContainerMap _textureContainerMap;
size_t _numTextureBytesAllocatedOnCPU;
// Properties
properties::IntProperty _cpuAllocatedTileData;
properties::IntProperty _gpuAllocatedTileData;
properties::IntProperty _tileCacheSize;
properties::TriggerProperty _applyTileCacheSize;
properties::TriggerProperty _clearTileCache;
/// Whether or not pixel buffer objects should be used when uploading tile data
properties::BoolProperty _usePbo;
};
} // namespace cache
@@ -22,62 +22,70 @@
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *
****************************************************************************************/
#ifndef __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_LRU_CACHE___H__
#define __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_LRU_CACHE___H__
#include <list>
#include <unordered_map>
#include <modules/globebrowsing/cache/texturecontainer.h>
#include <modules/globebrowsing/tile/tiletextureinitdata.h>
namespace openspace {
namespace globebrowsing {
namespace cache {
/**
* Least recently used cache that knows about its memory impact. This class is templated
* and the second template argument <code>ValueType</code> needs to have a function
* <code>void memoryImpact()</code> that returns the size of the object given in whatever
* unit is used for size in the creation of the <code>MemoryAwareLRUCache</code>.
* It can for example be given in kilobytes.
* <code>KeyType</code> needs to be a size comparable type.
*/
TextureContainer::TextureContainer(TileTextureInitData initData, size_t numTextures)
: _initData(initData)
, _freeTexture(0)
, _numTextures(numTextures)
{
reset();
}
template<typename KeyType, typename ValueType, typename HasherType>
class MemoryAwareLRUCache {
public:
/**
* \param maximumSize is the maximum size of the <code>MemoryAwareLRUCache</code>
* Once the maximum size is reached, the cache will start removing objects that were
* least recently used. The maximum size can for example be given in kilobytes. It
* must be the same size unit as used by the cached object class
* <code>ValueType</code>.
*/
MemoryAwareLRUCache(size_t maximumSize);
void TextureContainer::reset() {
_textures.clear();
_freeTexture = 0;
ghoul::opengl::Texture::AllocateData allocate =
_initData.shouldAllocateDataOnCPU() ?
ghoul::opengl::Texture::AllocateData::Yes :
ghoul::opengl::Texture::AllocateData::No;
for (size_t i = 0; i < _numTextures; ++i)
{
auto tex = std::make_unique<ghoul::opengl::Texture>(
_initData.dimensionsWithPadding(),
_initData.ghoulTextureFormat(),
_initData.glTextureFormat(),
_initData.glType(),
ghoul::opengl::Texture::FilterMode::Linear,
ghoul::opengl::Texture::WrappingMode::ClampToEdge,
allocate
);
tex->setDataOwnership(ghoul::opengl::Texture::TakeOwnership::Yes);
tex->uploadTexture();
tex->setFilter(ghoul::opengl::Texture::FilterMode::AnisotropicMipMap);
_textures.push_back(std::move(tex));
}
}
void put(const KeyType& key, const ValueType& value);
void clear();
bool exist(const KeyType& key) const;
ValueType get(const KeyType& key);
size_t size() const;
size_t maximumSize() const;
void TextureContainer::reset(size_t numTextures) {
_numTextures = numTextures;
reset();
}
void setMaximumSize(size_t maximumSize);
ghoul::opengl::Texture* TextureContainer::getTextureIfFree() {
ghoul::opengl::Texture* texture = nullptr;
if (_freeTexture < _textures.size()) {
texture = _textures[_freeTexture].get();
_freeTexture++;
}
return texture;
}
private:
void clean();
using Item = std::pair<KeyType, ValueType>;
using Items = std::list<Item>;
Items _itemList;
std::unordered_map<KeyType, decltype(_itemList.begin()), HasherType> _itemMap;
size_t _cacheSize;
size_t _maximumCacheSize;
const openspace::globebrowsing::TileTextureInitData& TextureContainer::tileTextureInitData() const {
return _initData;
};
} // namespace cache
} // namespace globebrowsing
} // namespace openspace
size_t TextureContainer::size() const {
return _textures.size();
};
#include <modules/globebrowsing/cache/memoryawarelrucache.inl>
#endif // __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_LRU_CACHE___H__
}
}
}
@@ -22,37 +22,59 @@
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *
****************************************************************************************/
#ifndef __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_CACHEABLE___H__
#define __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_CACHEABLE___H__
#ifndef __OPENSPACE_MODULE_GLOBEBROWSING___TEXTURE_CONTAINER___H__
#define __OPENSPACE_MODULE_GLOBEBROWSING___TEXTURE_CONTAINER___H__
#include <modules/globebrowsing/tile/tiletextureinitdata.h>
#include <memory>
#include <vector>
namespace openspace {
namespace globebrowsing {
namespace cache {
/**
* Base class to be extended by classes that need to be cached and make use of the
* memoryImpact interface. A class extending <code>MemoryAwareCacheable</code> needs to
* know its memory impact at initialization and hence provide the memory impact in its
* constructors. The memory impact can not change during the lifetime of an object that is
* a <code>MemoryAwareCacheable</code>.
* Owner of texture data used for tiles. Instead of dynamically allocating textures one
* by one, they are created once and reused.
*/
class MemoryAwareCacheable {
class TextureContainer
{
public:
/**
* \param memoryImpact is the memory impact of the object. Can for example be given
* in kilobytes.
* \param initData is the description of the texture type.
* \param numTextures is the number of textures to allocate.
*/
MemoryAwareCacheable(size_t memoryImpact) : _memoryImpact(memoryImpact) {};
~MemoryAwareCacheable() {};
TextureContainer(TileTextureInitData initData, size_t numTextures);
size_t memoryImpact() { return _memoryImpact; };
~TextureContainer() = default;
void reset();
void reset(size_t numTextures);
protected:
size_t _memoryImpact;
/**
* \returns a pointer to a texture if there is one texture never used before.
* If there are no textures left, nullptr is returned. TextureContainer still owns
* the texture so no delete should be called on the raw pointer.
*/
ghoul::opengl::Texture* getTextureIfFree();
const TileTextureInitData& tileTextureInitData() const;
/**
* \returns the number of textures in this TextureContainer
*/
size_t size() const;
private:
std::vector<std::unique_ptr<ghoul::opengl::Texture>> _textures;
size_t _freeTexture;
const TileTextureInitData _initData;
size_t _numTextures;
};
} // namespace cache
} // namespace globebrowsing
} // namespace openspace
#endif // __OPENSPACE_MODULE_GLOBEBROWSING___MEMORY_AWARE_CACHEABLE___H__
#endif // __OPENSPACE_MODULE_GLOBEBROWSING___TEXTURE_CONTAINER___H__