We present results from testing parallel versions of algorithms for derivative-free optimization. Such algorithms are required when an analytical expression for the objective function is not available, which often happens in simulator-driven product development. Since the objective function is unavailable, we cannot use the common algorithms that require gradient and Hessian information; instead, special algorithms for derivative-free optimization are used. Such algorithms are typically sequential, and here we present the first test results of parallelizing such algorithms. The parallel extensions include using several start points, generating several points from each start point in each iteration, alternative model building, and more. We also investigate whether we can generate synergy between the different start points through information sharing. One example of this is using several models to predict the objective function value of a point in order to prioritize the order in which points are sent for evaluation. We also present results for higher-level control of the optimization algorithms.
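The information-sharing idea mentioned above, using several surrogate models to rank candidate points before sending them for evaluation, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the helper name `prioritize_candidates`, the model interface, and the toy quadratic surrogates are all assumptions made for the example.

```python
import numpy as np

def prioritize_candidates(candidates, models):
    """Rank candidate points by the mean objective value predicted by
    several surrogate models, lowest predicted value first.

    `candidates` is a list of points; `models` is a list of callables,
    each mapping a point to a predicted objective value. Both the
    function name and this interface are illustrative, not from the paper.
    """
    # One row of predictions per candidate, one column per model.
    scores = np.array([[m(x) for m in models] for x in candidates])
    # Sort candidates by the average prediction across all models.
    order = np.argsort(scores.mean(axis=1))
    return [candidates[i] for i in order]

# Toy surrogates standing in for models built from different start points.
model_a = lambda x: (x - 1.0) ** 2
model_b = lambda x: (x - 3.0) ** 2

# Candidates are ranked so the most promising point is evaluated first.
ranked = prioritize_candidates([0.0, 2.0, 5.0], [model_a, model_b])
```

In a parallel setting, the ranked list would determine the order in which points are dispatched to the available workers for expensive simulator evaluations.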