Subject: Re: [boost] new library (space partitioning)
From: Adam Wulkiewicz (adam.wulkiewicz_at_[hidden])
Date: 2010-08-07 09:03:55
Hi,
I couldn't decide which point concept is best, so I've implemented
kd-tree building for 3 kinds of points, to perform some tests and to work
out the basic structure of the library, the data type adaptors, etc.
These are the 'internal' point concepts (a minimal model of concept 1 is
sketched after the list):
1. compile-time point
   point_category = compiletime_point_tag
   coordinate_type
   coordinate_count
   coordinate_type const & get<K>()
   void set<K>(v)

2. run-time point
   point_category = runtime_point_tag
   coordinate_type
   coordinate_count
   coordinate_type const & get(k)
   void set(k, v)

3. dynamic point
   point_category = dynamic_point_tag
   coordinate_type
   coordinate_type const & get(k)
   void set(k, v)
   coordinate_count()
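To make concept 1 concrete, here is a minimal sketch of a type modeling
it (the name my_ct_point, the placeholder tag definition and the float/3D
choice are illustrative only, not taken from the library headers):

#include <cstddef>

struct compiletime_point_tag {}; // placeholder for the tag from the library

struct my_ct_point
{
    typedef compiletime_point_tag point_category;
    typedef float coordinate_type;
    static const std::size_t coordinate_count = 3;

    // the coordinate index is a template argument, known at compile time
    template <std::size_t K>
    coordinate_type const& get() const { return c[K]; }

    template <std::size_t K>
    void set(coordinate_type const& v) { c[K] = v; }

private:
    coordinate_type c[coordinate_count];
};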
I don't know which one is best for space partitioning (compile-time or
run-time). There are 'run-time' data structures (various BSP-trees,
including the kd-tree) and there are 'compile-time' data structures
(quadtree/octree/..., regular grid, hierarchical grid, BVH tree).
'Run-time' and 'compile-time' refer to the kind of access to the
coordinates. Building of all the data structures is performed at run
time, because the coordinate values are set at run time.
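To show what that kind of access means in code, here is a hypothetical
coordinate sum written once against the run-time interface and once
against the compile-time interface (illustration only, not library code):

#include <cstddef>

// run-time access: the coordinate index is an ordinary function argument
template <typename P>
typename P::coordinate_type sum_rt(P const& p)
{
    typename P::coordinate_type s = 0;
    for (std::size_t i = 0; i < P::coordinate_count; ++i)
        s += p.get(i);
    return s;
}

// compile-time access: the index is a template argument, so the loop
// is unrolled by recursive instantiation
template <typename P, std::size_t K>
struct sum_ct_impl
{
    static typename P::coordinate_type apply(P const& p)
    {
        return p.template get<K - 1>() + sum_ct_impl<P, K - 1>::apply(p);
    }
};

template <typename P>
struct sum_ct_impl<P, 0>
{
    static typename P::coordinate_type apply(P const&) { return 0; }
};

template <typename P>
typename P::coordinate_type sum_ct(P const& p)
{
    return sum_ct_impl<P, P::coordinate_count>::apply(p);
}

An algorithm written in the second style can only consume points that
expose get<K>().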
Kd-tree (kdtree.hpp) has 2 template parameters, Point and PointAdaptor.
The default PointAdaptor is point_traits<Point>::point_adaptor.
One may write an adaptor modeling any of the concepts above; some
adaptors are already written (types.hpp). And if one doesn't want to
pass PointAdaptor each time a kd-tree is created, one may specialize
point_traits instead.
So, kd-tree should work for:
user-defined types
boost::geometry - concept 1
boost::array - concept 2
std::vector, std::deque, uBlas vectors - concept 3
Example:
Let's assume that a point is defined as follows:
template <typename V, size_t D>
class runtime_point
{
public:
    typedef V coordinate_type;
    static const size_t coordinate_count = D;

    inline coordinate_type const& operator[](size_t i) const;
    inline coordinate_type & operator[](size_t i);

private:
    coordinate_type c[coordinate_count];
};
The adaptor looks like this:
template <typename Point>
struct runtime_point_indexop : private Point
{
    typedef runtime_point_tag point_category;
    typedef typename Point::coordinate_type coordinate_type;
    static const size_t coordinate_count = Point::coordinate_count;

    inline runtime_point_indexop() {}
    inline runtime_point_indexop(Point const& pt) : Point(pt) {}

    inline coordinate_type const& get(size_t i) const
    { return Point::operator[](i); }

    inline void set(size_t i, coordinate_type const& v)
    { Point::operator[](i) = v; }
};
Btw, it's the default adaptor. One might write:
typedef runtime_point<float, 2> rtp;
std::vector<rtp> rtv;
// ... //
space::kdtree<rtp> rtkdt(rtv.begin(), rtv.end());
Or explicitly:
typedef space::runtime_point_indexop<rtp> rtpadapt;
space::kdtree<rtp, rtpadapt> rtkdt(rtv.begin(), rtv.end());
Or specialize point_traits in the first place:
namespace space {

template <>
struct point_traits<rtp>
{
    typedef runtime_point_indexop<rtp> point_adaptor;
};

} // namespace space
And then:
space::kdtree<rtp> rtkdt(rtv.begin(), rtv.end());
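As mentioned above, an adaptor may model any of the concepts. For
instance, a hypothetical concept-1 adaptor over the same runtime_point
could look like this (illustrative only; the adaptors actually used in
the tests live in types.hpp and may differ):

template <typename Point>
struct compiletime_point_indexop : private Point
{
    typedef compiletime_point_tag point_category; // tag from the library headers
    typedef typename Point::coordinate_type coordinate_type;
    static const size_t coordinate_count = Point::coordinate_count;

    inline compiletime_point_indexop() {}
    inline compiletime_point_indexop(Point const& pt) : Point(pt) {}

    template <size_t K>
    inline coordinate_type const& get() const
    { return Point::operator[](K); }

    template <size_t K>
    inline void set(coordinate_type const& v)
    { Point::operator[](K) = v; }
};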
In main.cpp there are:
- 2 user-defined point types:
  * compiletime_point (boost::geometry::point interface)
  * runtime_point (a point with operator[] instead of get/set)
- point_traits specializations for:
  * boost::array (a usage sketch follows this list)
  * std::vector
- kd-tree build time tests
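For example, with those specializations in scope, building directly from
standard types might look like this (hypothetical usage sketch):

#include <boost/array.hpp>
#include <vector>

typedef boost::array<float, 2> pt2;

std::vector<pt2> pts;
// ... fill pts ...
space::kdtree<pt2> kdt(pts.begin(), pts.end());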
On my computer building takes (Core 2 Duo 2GHz, Release build, 50000
points, 2 coordinates per point, floats, full optimization):
                                               MSVC10    GCC3.4.5
runtime_point                                  0.018s    0.023s
compiletime_point                              0.017s    0.017s
boost::array                                   0.019s    0.022s
std::vector                                    9.5s      0.278s
runtime_point adapted from compiletime_point   0.022s    0.023s
compiletime_point adapted from runtime_point   0.017s    0.017s
Btw, the results for std::vector are weird.
I think that it is nice to have space partitioning working for many
point types (even for std::vector, uBlas, etc.), but then there must be
3 different kinds of algorithms for some cases.
It's not bad to convert from one representation to another, and it's
clearly best to have the algorithms work on compile-time points and to
convert run-time points when necessary.
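A hypothetical conversion helper along these lines (illustrative names,
not part of the submission) shows what I mean by converting:

#include <cstddef>

// copy a run-time point (get(i)) into a compile-time point (set<K>())
template <typename CtPoint, typename RtPoint, std::size_t K>
struct convert_impl
{
    static void apply(CtPoint& ct, RtPoint const& rt)
    {
        ct.template set<K - 1>(rt.get(K - 1));
        convert_impl<CtPoint, RtPoint, K - 1>::apply(ct, rt);
    }
};

template <typename CtPoint, typename RtPoint>
struct convert_impl<CtPoint, RtPoint, 0>
{
    static void apply(CtPoint&, RtPoint const&) {}
};

template <typename CtPoint, typename RtPoint>
CtPoint convert_to_ct(RtPoint const& rt)
{
    CtPoint ct;
    convert_impl<CtPoint, RtPoint, CtPoint::coordinate_count>::apply(ct, rt);
    return ct;
}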
What do you think about it?
And what do you think about my data types, adaptors etc?
Regards,
Adam