Hey @Havremunken,
In my scene hook I also implemented the Read, Write and CopyTo (I understand how these go together, but will CopyTo ever really be used in a scene hook? I placed a breakpoint in the function and it didn't get called yet).
Yes, you really must do this. NodeData::CopyTo is the plugin layer endpoint for C4DAtom::GetClone, and Cinema 4D clones scenes and scene elements all the time. A good example is rendering: for everything but editor renderings, Cinema 4D will clone the whole document that is about to be rendered. If you do not implement NodeData::CopyTo, your scene hook in the cloned scene will not have its internal data, e.g., the GeListHead that holds the assets will be nullptr. And while it is true that scene hooks are unlikely to be cloned directly due to their singleton-like nature, there is no guarantee that some backend system does not clone scene hooks for some technical reason. Having set a breakpoint does not mean much, as it is literally impossible to test all the scenarios in which Cinema 4D wants to draw a copy of your node. All of this can then lead to access violations and more.
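To make this concrete, here is a minimal sketch of such a CopyTo override. It assumes a hook class named UrlCacheHookData with a GeListHead member _assetHead (names chosen for illustration only, not compiled or tested; the signature follows the same API version as the Read/Write examples below, older SDKs use the non-const variant):
Bool CopyTo(NodeData* dest, const GeListNode* snode, GeListNode* dnode, COPYFLAGS flags, AliasTrans* trn) const override
{
    // #dest is the data of the freshly cloned hook; the cast assumes it is of our own hook type.
    UrlCacheHookData* const destHook = static_cast<UrlCacheHookData*>(dest);
    if (destHook == nullptr || destHook->_assetHead == nullptr || _assetHead == nullptr)
        return false;
    // C4DAtom::CopyTo copies the whole custom branch, including all child nodes and their data.
    if (!_assetHead->CopyTo(destHook->_assetHead, flags, trn))
        return false;
    return SUPER::CopyTo(dest, snode, dnode, flags, trn);
}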
Ok, time to save a document ...
In your code, I do not see you serializing your custom branch, and manually allocating (and serializing) child nodes like this is not a good idea:
Bool Read(GeListNode* node, HyperFile* hf, Int32 level) override
{
    SUPER::Read(node, hf, level);
    // ...
    for (auto i = 0; i < _currentCacheSize; i++)
    {
        // Big NO-NO, never manually allocate nodes in deserialization. Technically possible but
        // should be avoided.
        auto newNode = AllocSmallListNode(ID_MORPHINETABLE_URLCACHE_ASSETNODE);
        auto const assetData = newNode->GetNodeData<UrlCacheAssetData>();
        if (assetData != nullptr)
        {
            String url;
            if (!hf->ReadString(&url))
                return false;
            String data;
            if (!hf->ReadString(&data))
                return false;
            assetData->SetData(url, data);
            // Not good: `newNode` has a different GeMarker (think of it as the UUID of the node)
            // than the node in the original scene. You might also miss data which Cinema 4D has
            // written into that original node.
            _assetHead->InsertLast(newNode);
        }
    }
    return true;
}
You really must do what I show in my example; this will automatically serialize all the data stored in the branch. As I wrote in my example, C4DAtom::ReadObject and WriteObject should be taken with a grain of salt: 'Object' is meant here in the sense of 'element'. You can write every type of C4DAtom with this, e.g., also our custom GeListHead and all its children.
Bool Read(GeListNode* node, HyperFile* hf, Int32 level)
{
    // Call the base implementation, more formality than necessity in our case.
    SUPER::Read(node, hf, level);
    // Deserialize the data of our custom branch.
    if (!_assetHead->ReadObject(hf, true))
        return false;
    return true;
}

Bool Write(const GeListNode* node, HyperFile* hf) const
{
    // Call the base implementation, more formality than necessity in our case.
    SUPER::Write(node, hf);
    // Serialize the data of our custom branch.
    if (!_assetHead->WriteObject(hf))
        return false;
    return true;
}
When you then have specialized data in the nodes of your branch (UrlCacheAssetData in your case), you can do one of three things:
BAD: Store the extra data with the scene hook, as you do in your code where you write the strings into the hyper file chunk of the scene hook. While this technically works and can be done intentionally for performance reasons, storing data from children in their parent is not a good idea, as this tends to become very complicated very quickly.
BETTER: Store the data in the hyper file chunk of each node which owns the data, i.e., override UrlCacheAssetData::Read, ::Write, and ::CopyTo (see the sketch after the code block below).
GOOD: When you only want to store atomic data which can be expressed in a data container and you own the node implementation, I would just store it in the node's data container, as you then do not have to do all the serialization dance in UrlCacheAssetData. E.g., this:
// PSEUDO CODE, not compiled or tested, take with a grain of salt.

// A node whose implementation we own, i.e., we own the address space of its data container and
// can write everywhere we want to.
BaseList2D* asset = BaseList2D::Alloc(ID_MORPHINETABLE_URLCACHE_ASSETNODE);

// Just write the data into the node.
BaseContainer* assetData = asset->GetDataInstance();
assetData->SetString(ID_MORPHINETABLE_URL, "www.google.com"_s);
assetData->SetString(ID_MORPHINETABLE_DATA, "Bob's your uncle."_s);

// Insert the node into the branch of your scene hook. Its data will now be stored with the scene.
// You can of course also go the inverse route to read or modify the data by getting the node and
// then getting its data container again.

// We technically can also do the same for nodes where we do not own the implementation. Here we
// just store the data in a custom container under a custom plugin ID.

// We do not own the cube impl.
BaseList2D* cube = BaseList2D::Alloc(Ocube);
// A custom plugin ID we registered to store alien data in a collision free manner in Ocube.
const Int32 pid = 123456789;

// Create a container (with #pid as its ID) and just write some data into it; the element IDs do
// not matter here.
BaseContainer bc (pid);
bc.SetString(0, "www.google.com"_s);
bc.SetString(1, "Bob's your uncle."_s);

// Get the data container of the node.
BaseContainer* cubeData = cube->GetDataInstance();

// Write the data, making sure we do not overwrite native data.
GeData nativeData;
// There is no alien data at all.
if (cubeData->FindIndex(pid, nativeData) == NOTOK)
    cubeData->SetContainer(pid, bc);
// There is already data, it is of type #BaseContainer, and it is a container marked with the ID
// #pid, so it is one of our containers; we can safely overwrite things.
else if (nativeData.GetType() == DA_CONTAINER && nativeData.GetContainer().GetId() == pid)
    cubeData->SetContainer(pid, bc);
// There is already native data at #pid and it is not ours, we are screwed. This is not impossible,
// as some nodes dynamically write data into their data container (just as we did above for
// #ID_MORPHINETABLE_URLCACHE_ASSETNODE), but it is quite unlikely to happen in the range of
// plugin IDs.
else
    CrashAndPretendItWasNotOurFault();
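And for completeness, here is roughly what the BETTER option could look like: a minimal sketch of UrlCacheAssetData serializing and copying its own data, assuming it stores its two strings in members _url and _data (hypothetical names, not compiled or tested):
Bool UrlCacheAssetData::Read(GeListNode* node, HyperFile* hf, Int32 level)
{
    // Read the strings back in the same order in which they were written.
    if (!hf->ReadString(&_url))
        return false;
    if (!hf->ReadString(&_data))
        return false;
    return true;
}

Bool UrlCacheAssetData::Write(const GeListNode* node, HyperFile* hf) const
{
    // Write the strings into the hyper file chunk of this node, not the one of the scene hook.
    if (!hf->WriteString(_url))
        return false;
    if (!hf->WriteString(_data))
        return false;
    return true;
}

Bool UrlCacheAssetData::CopyTo(NodeData* dest, const GeListNode* snode, GeListNode* dnode, COPYFLAGS flags, AliasTrans* trn) const
{
    // Carry the strings over into the data of the cloned node.
    UrlCacheAssetData* const destData = static_cast<UrlCacheAssetData*>(dest);
    if (destData == nullptr)
        return false;
    destData->_url = _url;
    destData->_data = _data;
    return NodeData::CopyTo(dest, snode, dnode, flags, trn);
}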
Finally, 0xFFFFFFFFFFFFFFFF does have a special meaning. It is the end of the 64-bit address space, and operating systems/debuggers usually report this address when an access violation occurs. I.e., you have a pointer p which points to some place in memory (and is therefore not the nullptr) which holds a FooThing. We then try to call p->Bar() (i.e., FooThing::Bar). But the FooThing at p is long deleted; the memory at p with the size of FooThing is either empty or does not match the layout of FooThing. An access violation happens, the fireworks begin. TLDR: 0xFFFFFFFFFFFFFFFF is an alias for a corrupted/dangling pointer.
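Or, expressed as a tiny and purely illustrative snippet with the hypothetical FooThing from above:
FooThing* p = new FooThing();  // p points to a valid FooThing.
delete p;                      // The object is gone, but p still holds its old address.
p->Bar();                      // Undefined behaviour; the debugger will then often report an
                               // access violation at an address such as 0xFFFFFFFFFFFFFFFF.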
In your case, probably something with the irregularly instantiated ID_MORPHINETABLE_URLCACHE_ASSETNODE node from your NodeData::Free is going wrong. Or something else, I cannot really tell without the full source code. When you decide to share your code, please ideally share your project, not just some .h/.cpp files. You can share code confidentially via [email protected].
Cheers,
Ferdinand