This release is a fairly large reorganization of the code. Its purpose is to ensure that C++-created objects surface to Python with their full object wrappers, rather than the wrappers of one of their base classes.
There are two cases:
1. A C++-created object is returned from some accessor function.
2. A C++-created object is passed back through the callback data object of a callback. Previously, we supported downcasting of the callback data itself, but the callback data object could still contain references to C++-created objects. The example that brought all this to light is Vrui's ToolManager's tool creation callbacks. The callback data object for these callbacks has a reference to the tool that was just created. Although the callback data was correctly downcast to a ToolCreationCallbackData object, the "tool" field contained an object wrapped for the generic Tool type, which is not very useful.
Version 0.4.0 of pypputils now supports case #1. It has a new configuration_base method, "get_dynamic_cast_accessors", which returns a list of specifications for accessor methods and, for each, a list of C++ types to be tried via dynamic_cast before the object is returned to Python. If one matches, a Boost.Python to_python converter is employed to get the wrapper corresponding to that C++ type. If this has been done before, no harm is done; what matters is that the desired wrapper be established for the object the first time it surfaces to Python.
This release has an incomplete implementation of callback case #2 above. For each callback, there will be an optional list of field names in the callback data and, for each field name, a list of C++ types to be run over that field's value to try to establish a wrapper.
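The planned mechanism is the same dynamic_cast chain, applied to a named field of the callback data rather than to an accessor's return value. Here is a minimal sketch under assumed names (ToolCreationCallbackData and its "tool" field mirror the Vrui example above, but the code is illustrative, not Vrui's actual API):

```cpp
#include <string>

// Hypothetical stand-ins for the Vrui classes involved:
struct Tool { virtual ~Tool() {} };
struct LocatorTool : Tool {};

struct ToolCreationCallbackData
	{
	Tool* tool; // the field whose value should get its full wrapper
	};

// Sketch of the planned case #2 machinery: for a configured field name
// ("tool" here), run the configured list of C++ types over the field's
// value; the first successful dynamic_cast determines which Python
// wrapper would be established for the contained object.
std::string wrapperForToolField(const ToolCreationCallbackData& cbData)
	{
	if(dynamic_cast<LocatorTool*>(cbData.tool)!=0)
		return "LocatorTool";
	return "Tool";
	}
```

With this in place, a tool creation callback would see its "tool" field wrapped as a LocatorTool instead of a generic Tool.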
This preliminary release is meant to permit testing of the other changes that have come about. I realized that no specific tools (e.g., LocatorTool) had yet been exposed to Python. In the process of exposing them, I found that some tools' headers could not be included on their own (because they did not include all of their dependencies). This traced back to the way I had implemented pyVrui: each header to be exposed was fed to gccxml one at a time. The current version instead defines a file, pyVrui_headers.h, that includes all the Vrui headers. I had tried this earlier and ran into the problem that no declarations remained at the end. I finally figured out why: Py++ automatically excludes declarations that do not come from any of the directories specified to gccxml, presumably to eliminate system headers that might be included along the way. So when I included pyVrui_headers.h from the pyVrui directory, all the Vrui declarations were excluded by Py++. The approach now is to copy pyVrui_headers.h into the Vrui source directory and process it from there. Hopefully this will not be a problem….
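For concreteness, the aggregate header is just a flat list of includes fed to gccxml in a single pass. A minimal sketch of what pyVrui_headers.h might look like (the specific header paths below are illustrative assumptions, not Vrui's actual file list):

```cpp
// pyVrui_headers.h: single aggregate header fed to gccxml, so that all
// Vrui declarations are parsed in one pass instead of one header at a time.
// Copied into the Vrui source directory before processing, so that Py++'s
// directory-based filtering keeps the declarations it pulls in.
// (Header paths are illustrative; the real file lists every Vrui header.)
#include <Vrui/Vrui.h>
#include <Vrui/ToolManager.h>
#include <Vrui/Tools/LocatorTool.h>
// ... one #include per exposed Vrui header ...
```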
BUT, there are two good outcomes. First, the generation of the C++ stub files (the Py++ phase) is MUCH faster. Second, there are no longer the annoying “second to-python converter registered” messages when the Vrui extension module is loaded.
One final note: this approach is a somewhat static solution, in that only those types that were available when the Vrui extension module was built will be tried for the dynamic_casts. I don't have a solution for that yet. However, it will get us a long way: we should be able to use the LocatorTool, for example.
P.S. I initially exposed all the Tools Vrui defines and, after a long day of tracking down and fixing missing call policies, was rewarded with a pile of undefined symbol errors when I finally got to the link phase. I then discovered that libVrui only links the code for the following subset of tools:
These last two I have not yet supported. The Vrui makefile has a variable, "VRUI_TOOLHEADERS", which defines the first set's headers, but libVrui is then linked with the last two in addition. I'll have to look into that….