Image based visual servoing with kinematic singularity avoidance for mobile manipulator

PeerJ Computer Science

Introduction

Visual servo control refers to the use of computer vision data to control the pose of a robot (Chaumette & Hutchinson, 2006, 2007; Wang et al., 2024; Leomanni et al., 2024; Kumar et al., 2024). Estimating the pose of a robot is a crucial task in robotics (Ban et al., 2024). It forms the basis for numerous subsequent tasks, such as visually-guided object grasping and manipulation (Li et al., 2023; Tian et al., 2023; Zuo et al., 2019), autonomous navigation (Bultmann, Memmesheimer & Behnke, 2023), multi-robot collaboration (Li, De Wagter & De Croon, 2022; Papadimitriou, Mansouri & Nikolakopoulos, 2022) and human-robot interaction (Christen et al., 2023).

Exteroceptive sensors, such as cameras, offer a robust alternative that relies on external references rather than internal measurements. That is the reason why image based visual servoing (IBVS) control schemes have become so popular. In IBVS, the primary objective is to minimize the error between the current image features captured by a camera and the desired image features. This error is computed in the image space, and the control laws are designed to drive this error to zero, thereby achieving the desired pose of the robot. The camera used in IBVS can be configured in two ways: eye-to-hand, where the camera is fixed in the environment, or eye-in-hand, where the camera is mounted on the end-effector, which is the approach used in this work (Flandin, Chaumette & Marchand, 2000).

In this context, the implementation of IBVS is particularly beneficial for redundant manipulators, which require advanced control strategies to exploit their additional degrees of freedom effectively. Through the combination of visual feedback and advanced control terms, IBVS enables these robots to perform complex tasks with high precision and reliability. Some related works include (Ren, Li & Li, 2020; Heshmati-Alamdari et al., 2014; Karras et al., 2022; Heshmati-Alamdari et al., 2024), where IBVS is implemented in redundant manipulators using feature constraints to keep image features in the field of view.

Several studies have explored using redundancy to define different types of constraints by incorporating secondary tasks (increase manipulability) alongside the main task (visual servoing) (Mansard & Chaumette, 2009; Yoshikawa, 1996; Chaumette & Marchand, 2001; Rastegarpanah, Aflakian & Stolkin, 2021). However, these works do not consider mobile platforms.

In recent years, the introduction of mobile manipulators into the industry has transformed the viewpoint of both users and developers. These robots enhance the manipulation abilities of articulated industrial arms by extending their reach across the entire workshop area. However, in current industrial applications, the navigation and manipulation phases of mobile manipulators are typically separated for technical simplicity and safety considerations (Marvel & Bostelman, 2013; Markis et al., 2019). Nevertheless, it is possible to achieve cooperation between the mobile platform and the manipulator arm simultaneously with visual servoing approaches (González Huarte & Ibarguren, 2024; Wang, Lang & De Silva, 2009; Lippiello, Siciliano & Villani, 2007).

Recently, hybrid approaches that combine IBVS and position-based visual servoing (PBVS) have been explored (Li & Xiong, 2021; Jo & Chwa, 2023). These approaches offer increased robustness to feature occlusion and compensate for inaccurate pose estimation due to slippage errors in odometry and wheel encoders, as IBVS does not rely on the internal odometry of the robot but instead uses visual feedback from the environment. However, hybrid visual servoing requires the integration of both image-based and position-based control loops, which increases the system’s complexity, making the design, tuning, and implementation of the control system more challenging.

Most research on visual servoing and mobile robots focuses on mobile robot navigation, with only a few instances of vision-based mobile manipulation being documented (Wang, Lang & De Silva, 2009). Many of the works that address the visual control of mobile platforms include non-holonomic constraints (Silveira et al., 2001; De, Bayer & Koditschek, 2014; Huang & Su, 2019; Zhao & Wang, 2020; Dirik, Kocamaz & Dönmez, 2020; Li & Xiong, 2021; Yu & Wu, 2024). It should be emphasized that when a mobile platform has non-holonomic constraints, it reduces the possibility of exploiting kinematic redundancies.

This work proposes a VS scheme for an eye-in-hand IBVS task on a redundant omnidirectional mobile manipulator. In addition, the proposed scheme deals with singularities based on damped least-squares (DLS) and manipulability measure maximization. To reduce the impact of singularities and smooth out discontinuities, the inversion of the Jacobian matrix is solved using DLS. Since the mobile manipulator is redundant, a task prioritization scheme is proposed to perform the eye-in-hand IBVS as the primary task while maximizing a manipulability measure as a secondary task. Finally, a gravity compensation term is considered, especially for the case of real-world applications. The effectiveness of the proposed algorithm is demonstrated through both simulation and experimental results considering the Kuka YouBot, which is an 8-DOF omnidirectional mobile manipulator. The simulation codes are shared to facilitate the replicability of this work.

The rest of this research is organized as follows: In “Image Based Visual Servoing (IBVS)” the reader will find an overview of visual servoing. In “Eye-in-hand IBVS for omnidirectional mobile manipulators”, the proposed eye-in-hand IBVS approach is addressed. This scheme is tested in both simulation and real-world experiments; the results are presented in “Experimental Results”. Then, an analysis of the obtained results and the future research directions are given in “Discussion”. Finally, the conclusions are discussed in “Conclusions”.

Image based visual servoing

Multiple schemes of visual servoing are found in the literature (Chaumette & Hutchinson, 2006, 2007). In this work, we implement IBVS with an eye-in-hand configuration. This approach is described below.

This method involves extracting relevant information from the visual input to guide the actions of the robots, such as positioning, navigating, or manipulating objects. The primary objective is to minimize the error between the current visual perception and the desired visual state

$$e(t) = s(m(t), a) - s^*$$
where $s$ is the vector of feature coordinates in the image plane, $m(t) = (u, v)$ are the measurements of interest points (in pixel units) and $a = (c_u, c_v, f, \alpha)$ are additional parameters of the system (in this case, the intrinsic parameters of the camera): $c_u$ and $c_v$ are the coordinates of the principal point, $f$ the focal length and $\alpha$ the ratio of the pixel dimensions. $s^*$ is the vector of the desired coordinates.

The next step is to calculate a velocity controller based on the relationship between the time variation of s and the velocity of the camera, considering the case of controlling it with six degrees of freedom (DOF).

$$v_c = (\boldsymbol{v}_c, \boldsymbol{\omega}_c)$$
where $\boldsymbol{v}_c$ and $\boldsymbol{\omega}_c$ are the instantaneous linear and angular velocity of the origin of the camera. The feature velocity $\dot{s}$ and $v_c$ are related by a matrix known as the interaction matrix, denoted by $L_s$:

$$\dot{s} = L_s v_c$$
where $L_s \in \mathbb{R}^{k \times 6}$ and $k$ is twice the number of features. For each feature $i$, the interaction matrix is defined as

$$L_s = \begin{bmatrix} L_{s_1} \\ L_{s_2} \\ \vdots \\ L_{s_n} \end{bmatrix}$$
where

$$L_{s_i} = \begin{bmatrix} -\dfrac{1}{Z} & 0 & \dfrac{x}{Z} & xy & -(1+x^2) & y \\[2mm] 0 & -\dfrac{1}{Z} & \dfrac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix}$$
where $Z$ is the depth from the camera to the feature. In this work, this depth is estimated using prior knowledge of the ArUco pattern. $x$ and $y$ are the feature coordinates in the image plane, given by

$$x = (u - c_u)/(f\alpha), \qquad y = (v - c_v)/f$$

The relationship between camera velocity and the variation of the error is given by

$$\dot{e} = L_s v_c$$
and, to guarantee an exponential reduction of the error, the camera velocities are calculated with

$$v_c = -\lambda L_s^{+} e$$
where $L_s^{+}$ is the Moore-Penrose pseudoinverse of $L_s$ and $\lambda > 0$ is a control gain.
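As an illustration of this control law, the following Python/NumPy sketch stacks the per-feature interaction matrices and evaluates Eq. (9). It is not part of the original implementation; the gain, depths, and feature coordinates are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a single image point (x, y) at depth Z (2x6)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1.0 + x**2),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y**2, -x * y,        -x],
    ])

def ibvs_camera_velocity(s, s_des, Z, lam=1.5):
    """Camera twist v_c = -lambda * pinv(L_s) * e for stacked point features.

    s, s_des: (n, 2) arrays of current and desired normalized image coordinates.
    Z:        (n,) array of estimated feature depths.
    """
    e = (s - s_des).reshape(-1)                 # feature error e = s - s*
    L = np.vstack([interaction_matrix(x, y, z)  # stacked interaction matrix L_s
                   for (x, y), z in zip(s, Z)])
    return -lam * np.linalg.pinv(L) @ e         # control law of Eq. (9)

# Illustrative example: four coplanar points at 1 m depth.
s_cur = np.array([[0.10, 0.12], [-0.10, 0.12], [-0.10, -0.12], [0.10, -0.12]])
v_c = ibvs_camera_velocity(s_cur, 0.8 * s_cur, Z=np.full(4, 1.0))
print(v_c)  # 6-vector (v_x, v_y, v_z, w_x, w_y, w_z)
```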

Eye-in-hand IBVS for omnidirectional mobile manipulators

This section provides the kinematic model for an omnidirectional mobile manipulator. Commonly, mobile manipulators are redundant robots, which is beneficial for working with task priority schemes. The robot, with eight degrees of freedom, consists of an omnidirectional mobile platform with four mecanum wheels, allowing it to move to any position with a desired orientation (Zhang et al., 2016; Wu et al., 2017; Kundu et al., 2017).

Omnidirectional mobile manipulator kinematics

The eye-in-hand system with coordinate frame assignments for robot kinematics is illustrated in Fig. 1. We start by defining the following homogeneous matrices: $T_b^w$ represents the position and orientation (pose) of the mobile platform with respect to the world frame, $T_m^b$ is a constant transformation between the mobile platform and the manipulator base, and $T_e^m$ is the transformation from the manipulator base to the end-effector frame. Finally, $T_e^c$ is the position and orientation of the end-effector with respect to the camera frame.

Figure 1: Eye-in-hand system and coordinate frames assignment for a mobile manipulator.

The mobile platform pose $T_b^w$ is defined as

$$T_b^w(q_b) = \begin{bmatrix} \cos(\theta_b) & -\sin(\theta_b) & 0 & x_b \\ \sin(\theta_b) & \cos(\theta_b) & 0 & y_b \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where $q_b = [x_b \;\; y_b \;\; \theta_b]^T$ represents the position $(x_b, y_b)$ and orientation $\theta_b$ of the omnidirectional platform.

Let $q_m = [q_1 \;\; q_2 \;\; \cdots \;\; q_n]^T$ be the manipulator configuration with $n$ DOF. The manipulator kinematics $T_e^m(q_m)$ can be computed based on the Denavit-Hartenberg (DH) convention. Then, the forward kinematics $T_n^0(q_m)$ can be defined as

$$T_e^m(q_m) = T_n^0(q_m) = T_1^0(q_1)\, T_2^1(q_2) \cdots T_n^{n-1}(q_n)$$
where $T_i^{i-1}$ transforms the frame attached to link $i-1$ into the frame of link $i$ of the robot manipulator, with $i = 1, 2, \ldots, n$.

The joint variable for the mobile manipulator is given by $q = [q_b^T \;\; q_m^T]^T$. The forward kinematics $T_e^w(q)$ of the mobile manipulator is obtained with

$$T_e^w(q) = T_b^w(q_b)\, T_m^b\, T_e^m(q_m) = \begin{bmatrix} R_e^w & t_e^w \\ 0 & 1 \end{bmatrix}$$
where $R_e^w$ is the orientation and $t_e^w$ the position of the end-effector with respect to the world frame $w$.

Inverse kinematics involves calculating the joint variables $q$ given the desired end-effector pose. This calculation can be achieved by minimizing an error function through an iterative process based on differential kinematics (Sciavicco & Siciliano, 2012). Differential kinematics seeks to establish the relationship between the joint velocities $\dot{q}$ and the end-effector velocity $\dot{x}$.

$$\dot{x} = J(q)\dot{q}$$
where $J$ is the Jacobian matrix and $\dot{x} = (\boldsymbol{v}_e, \boldsymbol{\omega}_e)$, with $\boldsymbol{v}_e$ and $\boldsymbol{\omega}_e$ the instantaneous linear and angular velocities of the end-effector. Moreover, $\dot{q} = [\dot{q}_b^T \;\; \dot{q}_m^T]^T$, where $\dot{q}_b$ contains the mobile platform velocities $(\dot{x}_b, \dot{y}_b, \dot{\theta}_b)$ and $\dot{q}_m$ the manipulator velocities in joint space.

The Jacobian matrix J can be computed as follows

$$J(q) = \begin{bmatrix} J_v(q) \\ J_\omega(q) \end{bmatrix}$$
where $J_v(q)$ is the matrix relating the joint velocities to the end-effector linear velocity, and $J_\omega(q)$ is the matrix relating the joint velocities to the end-effector angular velocity.

The matrix $J_v(q)$ is defined as

$$J_v(q) = \begin{bmatrix} \dfrac{\partial t_x}{\partial x_b} & \dfrac{\partial t_x}{\partial y_b} & \dfrac{\partial t_x}{\partial \theta_b} & \dfrac{\partial t_x}{\partial q_1} & \cdots & \dfrac{\partial t_x}{\partial q_n} \\[3mm] \dfrac{\partial t_y}{\partial x_b} & \dfrac{\partial t_y}{\partial y_b} & \dfrac{\partial t_y}{\partial \theta_b} & \dfrac{\partial t_y}{\partial q_1} & \cdots & \dfrac{\partial t_y}{\partial q_n} \\[3mm] \dfrac{\partial t_z}{\partial x_b} & \dfrac{\partial t_z}{\partial y_b} & \dfrac{\partial t_z}{\partial \theta_b} & \dfrac{\partial t_z}{\partial q_1} & \cdots & \dfrac{\partial t_z}{\partial q_n} \end{bmatrix}$$
where $t_e^w = [t_x \;\; t_y \;\; t_z]^T$ is the end-effector position as a function of the joint variable $q$. Moreover, the matrix $J_\omega(q)$ is defined as

$$J_\omega(q) = \begin{bmatrix} 0 & 0 & z_m^w & z_0^w & z_1^w & \cdots & z_{n-1}^w \end{bmatrix}$$
where $z_m^w$ and $z_i^w$ are the $z$-axes of the rotation matrices $R_m^w$ and $R_i^w$, respectively. Notice that the first two columns are set to $0$ since $x_b$ and $y_b$ displacements do not provoke any end-effector angular velocity.

The rotation matrix $R_i^w$ can be found in the homogeneous transformation $T_i^w(q)$, which is

$$T_i^w(q) = T_b^w(q_b)\, T_m^b \prod_{j=1}^{i} T_j^{j-1}(q_j) = \begin{bmatrix} R_i^w & t_i^w \\ 0 & 1 \end{bmatrix}.$$
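When the analytic partial derivatives of Eq. (15) are inconvenient to derive, $J_v(q)$ can be approximated by finite differences. The sketch below assumes a generic forward-kinematics routine fk(q) that returns the 4×4 homogeneous matrix $T_e^w(q)$; the function name and step size are illustrative.

```python
import numpy as np

def numerical_position_jacobian(fk, q, eps=1e-6):
    """Finite-difference approximation of J_v in Eq. (15): d t_e^w / d q.

    fk: callable returning the 4x4 homogeneous transform T_e^w(q).
    q:  joint vector [x_b, y_b, theta_b, q_1, ..., q_n] (floats).
    """
    t0 = fk(q)[:3, 3]                      # end-effector position t_e^w
    Jv = np.zeros((3, q.size))
    for k in range(q.size):
        dq = np.zeros_like(q)
        dq[k] = eps
        Jv[:, k] = (fk(q + dq)[:3, 3] - t0) / eps
    return Jv
```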

Finally, the mapping from the mobile platform velocities $\dot{q}_b$ to each wheel velocity is given by the following relationship

$$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{bmatrix} = \begin{bmatrix} \sqrt{2}\sin(\theta_b + \tfrac{\pi}{4}) & -\sqrt{2}\cos(\theta_b + \tfrac{\pi}{4}) & -(L+l) \\ \sqrt{2}\cos(\theta_b + \tfrac{\pi}{4}) & \sqrt{2}\sin(\theta_b + \tfrac{\pi}{4}) & (L+l) \\ \sqrt{2}\cos(\theta_b + \tfrac{\pi}{4}) & \sqrt{2}\sin(\theta_b + \tfrac{\pi}{4}) & -(L+l) \\ \sqrt{2}\sin(\theta_b + \tfrac{\pi}{4}) & -\sqrt{2}\cos(\theta_b + \tfrac{\pi}{4}) & (L+l) \end{bmatrix} \begin{bmatrix} \dot{x}_b \\ \dot{y}_b \\ \dot{\theta}_b \end{bmatrix}$$
where $v_i$ is the linear velocity of wheel $i$, $L$ is half of the distance between the front and the rear wheels, and $l$ is half of the distance between the left and the right wheels, see Fig. 2.
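A minimal sketch of the wheel mapping of Eq. (18) is given below. The sign pattern mirrors the matrix as reconstructed above and ultimately depends on the wheel numbering and roller orientation, so it should be read as an illustration rather than the exact YouBot driver convention.

```python
import numpy as np

def wheel_velocities(qb_dot, theta_b, L=0.2355, l=0.15):
    """Map platform velocities (x_b_dot, y_b_dot, theta_b_dot) to the four
    mecanum wheel linear velocities, Eq. (18)."""
    s = np.sqrt(2) * np.sin(theta_b + np.pi / 4)
    c = np.sqrt(2) * np.cos(theta_b + np.pi / 4)
    M = np.array([[s, -c, -(L + l)],
                  [c,  s,  (L + l)],
                  [c,  s, -(L + l)],
                  [s, -c,  (L + l)]])
    return M @ np.asarray(qb_dot, dtype=float)

# Example: pure forward motion of 0.1 m/s with theta_b = 0.
print(wheel_velocities([0.1, 0.0, 0.0], theta_b=0.0))
```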

Figure 2: Omnidirectional mobile platform consisting of four mecanum wheels.

Eye-in-hand scheme with task priority and gravity compensation

To consider an eye-in-hand scheme for mobile manipulator visual servoing, we propose to rewrite Eq. (9) to map the camera velocity control to the end-effector velocity expressed in the world frame, which is

$$v_w = -\lambda \left(L_s\, {}^{c}V_w\right)^{+} e$$
where ${}^{c}V_w$ is a spatial motion transform matrix that maps velocities expressed in the world frame into the camera frame, and $v_w = (\boldsymbol{v}_w, \boldsymbol{\omega}_w)$ contains the linear $\boldsymbol{v}_w$ and angular $\boldsymbol{\omega}_w$ velocity expressed in the world frame. The matrix ${}^{c}V_w$ is given as

$${}^{c}V_w = \begin{bmatrix} R_w^c & [t_w^c]_\times \\ 0 & R_w^c \end{bmatrix}$$
where $[t_w^c]_\times$ is the skew-symmetric matrix associated with the vector $t_w^c$, and $T_w^c(R_w^c, t_w^c)$ is the homogeneous transformation of the world frame with respect to the camera frame, which is computed as

$$T_w^c(q) = T_e^c \left(T_e^w(q)\right)^{-1} = \begin{bmatrix} R_w^c & t_w^c \\ 0 & 1 \end{bmatrix}.$$
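The construction of ${}^{c}V_w$ and the world-frame command of Eqs. (19)-(21) can be sketched as follows; the block layout follows Eq. (20) as written above, and the function names are illustrative.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def velocity_twist(T_w_c):
    """Spatial motion transform cV_w built from T_w^c, following Eq. (20)."""
    R, t = T_w_c[:3, :3], T_w_c[:3, 3]
    V = np.zeros((6, 6))
    V[:3, :3] = R
    V[:3, 3:] = skew(t)   # upper-right block as written in Eq. (20)
    V[3:, 3:] = R
    return V

def world_frame_command(L_s, T_w_c, e, lam=1.5):
    """End-effector velocity expressed in the world frame, Eq. (19)."""
    return -lam * np.linalg.pinv(L_s @ velocity_twist(T_w_c)) @ e
```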

To map the Cartesian velocity $v_w = \dot{x}$ to joint velocities $\dot{q}$, using Eq. (13) we have

$$\dot{q} = J(q)^{+} v_w$$
where $J(q)^{+}$ is the Moore-Penrose pseudoinverse of $J(q)$.

A solution to the problem of inverting $J(q)$ in the neighborhood of a singularity is provided by the damped least-squares (DLS) inverse

$$J(q)^{+} = J(q)^T \left(J(q)J(q)^T + \beta^2 I\right)^{-1}$$
where $\beta$ is a damping factor that improves the conditioning of the inversion from a numerical viewpoint. DLS helps reduce the impact of singular configurations, which occur when $JJ^T$ is not full rank (that is, when its determinant is zero or any of its eigenvalues are zero).
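A minimal sketch of the DLS inverse of Eq. (23) is shown below, together with an illustrative nearly singular matrix showing how the damping keeps the inversion bounded; β = 0.2 is the value used later in the experiments.

```python
import numpy as np

def dls_pinv(J, beta=0.2):
    """Damped least-squares inverse of Eq. (23): J^T (J J^T + beta^2 I)^{-1}."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + beta**2 * np.eye(m))

# Near a singularity the Moore-Penrose pseudoinverse blows up,
# while the damped inverse stays bounded (at the cost of a small tracking error).
J = np.array([[1.0, 0.0, 0.0],
              [1.0, 1e-8, 0.0]])               # nearly rank-deficient Jacobian
print(np.linalg.norm(np.linalg.pinv(J)))       # very large
print(np.linalg.norm(dls_pinv(J)))             # bounded by the damping term
```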

Consider the eigenvalue decomposition $JJ^T = U\Sigma V^T$, where $\Sigma$ is a diagonal matrix containing the eigenvalues $\{\sigma_1, \sigma_2, \ldots, \sigma_n\}$. If we invert the decomposition, we get $(JJ^T)^{-1} = V\Sigma^{+}U^T$, where $\Sigma^{+}_{ii} = 1/\sigma_i$.

A singular configuration will have some $\sigma_i = 0$; we get a stable inversion if we add a positive scalar $\beta^2$, as shown in Eq. (24).

$$\Sigma^{+}_{ii} = \frac{1}{\sigma_i + \beta^2}.$$

Then, the stabilized $i$-th eigenvalue will be $\sigma_i + \beta^2$. The final form (Eq. (25)) is the well-known solution of the Tikhonov regularization (Nair, Hegland & Anderssen, 1997; Gerth, 2021). Note that $UV^T = I$ because $U = V$ for the symmetric matrix $JJ^T$.

$$U(\Sigma + \beta^2 I)V^T = U\Sigma V^T + \beta^2 UV^T = JJ^T + \beta^2 I.$$

Moreover, this work considers a task priority scheme to deal with feature error convergence of the visual servoing task while maximizing the mobile robot manipulability. The task priority scheme is given by

$$\dot{q} = J(q)^{+} v_w + \left(I - J(q)^{+}J(q)\right)\dot{q}_0$$
where $v_w$ is the main task, and $\left(I - J(q)^{+}J(q)\right)$ allows the projection of the secondary task $\dot{q}_0$ into the null space of $J$.

In this work, we propose a vector $\dot{q}_0$ to avoid singularities based on the manipulability measure $m(q)$ defined as

$$m(q) = \sqrt{\det\left(J(q)J(q)^T\right)}.$$

The secondary task $\dot{q}_0$ is calculated with

$$\dot{q}_0 = k_0 \frac{\partial m(q)}{\partial q}$$
where $k_0 > 0$. The robot redundancy is exploited to avoid kinematic singularities by maximizing the manipulability measure. Indeed, $\dot{q}_0$ can be designed for collision avoidance by letting the joints reconfigure to avoid obstacles without affecting the primary task. Moreover, $\dot{q}_0$ can also be designed for joint limit avoidance by guiding the joint configuration towards the middle of its range while simultaneously performing the primary task.
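The task-priority update of Eq. (26) with the manipulability-based secondary task of Eqs. (27)-(28) can be sketched numerically as follows. It assumes a callable jacobian(q), for instance the finite-difference routine sketched earlier; the gradient of m(q) is also approximated by finite differences, which is only one possible way to evaluate Eq. (28).

```python
import numpy as np

def manipulability(J):
    """Manipulability measure m(q) = sqrt(det(J J^T)), Eq. (27)."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def manipulability_gradient(jacobian, q, eps=1e-6):
    """Finite-difference gradient of m(q) used as the secondary task, Eq. (28)."""
    grad = np.zeros_like(q)
    m0 = manipulability(jacobian(q))
    for k in range(q.size):
        dq = np.zeros_like(q)
        dq[k] = eps
        grad[k] = (manipulability(jacobian(q + dq)) - m0) / eps
    return grad

def task_priority_qdot(jacobian, q, v_w, k0=2.6, beta=0.2):
    """Eq. (26): primary IBVS task plus null-space projected manipulability task."""
    J = jacobian(q)
    J_pinv = J.T @ np.linalg.inv(J @ J.T + beta**2 * np.eye(J.shape[0]))  # DLS, Eq. (23)
    qdot0 = k0 * manipulability_gradient(jacobian, q)
    N = np.eye(q.size) - J_pinv @ J       # null-space projector of the primary task
    return J_pinv @ v_w + N @ qdot0
```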

Finally, a gravity compensation term is also considered, which is crucial for real-world implementations. Let us consider the gravitational potential energy as follows

$$u(q) = g \sum_{i=1}^{n} m_i h_i(q)$$
where $g$ is the gravitational acceleration, $m_i$ is the mass of link $i$, and $h_i$ is the height of its center of mass, which depends on $q$. For simplicity, we estimate the center of mass at the middle of each link.

The gravity vector g(q) is calculated with

$$g(q) = \frac{\partial u(q)}{\partial q}.$$
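A numerical sketch of Eqs. (29)-(30) is given below, assuming a user-supplied function that returns the center-of-mass heights $h_i(q)$ (obtained, for instance, from the forward kinematics of each link); the names and the finite-difference step are illustrative.

```python
import numpy as np

def gravity_vector(com_heights, masses, q, g=9.81, eps=1e-6):
    """Numerical gradient of the potential energy u(q) = g * sum_i m_i h_i(q), Eq. (30).

    com_heights: callable q -> array of center-of-mass heights h_i(q).
    masses:      array of link masses m_i.
    """
    def u(q_):
        return g * np.dot(masses, com_heights(q_))
    grad = np.zeros_like(q)
    u0 = u(q)
    for k in range(q.size):
        dq = np.zeros_like(q)
        dq[k] = eps
        grad[k] = (u(q + dq) - u0) / eps
    return grad
```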

Finally, the term g(q) in Eq. (30) is added to Eq. (26)

$$\dot{q} = J(q)^{+} v_w + \left(I - J(q)^{+}J(q)\right)\dot{q}_0 + g(q).$$

Equation (31) defines the proposed eye-in-hand scheme with task priority and gravity compensation for a mobile manipulator with an omnidirectional platform.

Experimental results

This scheme is tested in both simulation and real-world experiments. The applicability is demonstrated on the Kuka YouBot which is an omnidirectional mobile manipulator.

Experiments are performed in three parts. First, an ideal simulation is performed where no gravity is required. Second, a simulation is performed in the Coppelia simulator. In this scenario, the proposed approach is tested under non-modeled dynamics and gravity compensation is required. Third, real-world experiments were performed on the Kuka YouBot robot with a conventional RGB camera.

The YouBot consists of a 5-DOF manipulator and a 3-DOF omnidirectional mobile platform. In terms of the mobile manipulator kinematics, the transformation $T_b^w$ can be calculated using the pose of the mobile platform, defined by $(x_b, y_b, \theta_b)$ as shown in Eq. (10). The constant transformation $T_m^b$ is represented as follows

$$T_m^b = \begin{bmatrix} 1 & 0 & 0 & 0.140 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0.151 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The transformation $T_e^m = T_5^0$ is obtained from the Denavit-Hartenberg table shown in Table 1. The joint variable $q$ for the mobile manipulator will be

$$q = [x_b \;\; y_b \;\; \theta_b \;\; \theta_1 \;\; \theta_2 \;\; \theta_3 \;\; \theta_4 \;\; \theta_5]^T$$
where $\theta_1$ to $\theta_5$ represent the joint configuration of the manipulator.

Table 1:
DH table for the Kuka YouBot manipulator.
Joint a (mm) α (rad) d (mm) θ (rad)
1 33 π/2 147 θ1
2 155 0 0 θ2
3 135 0 0 θ3
4 0 π/2 0 θ4
5 0 0 217.5 θ5
DOI: 10.7717/peerj-cs.2559/table-1
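For reference, the forward kinematics $T_e^w(q)$ can be sketched directly from Table 1 (lengths converted to meters), together with $T_b^w$ of Eq. (10) and the constant $T_m^b$ defined above. The standard DH convention is assumed here, which may differ from the convention used in the original implementation.

```python
import numpy as np

def dh(a, alpha, d, theta):
    """Homogeneous transform of one DH row (standard convention assumed)."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def youbot_fk(q):
    """Forward kinematics T_e^w(q) for q = [x_b, y_b, theta_b, theta_1..theta_5],
    using Eq. (10), the constant T_m^b above, and the DH rows of Table 1 (in meters)."""
    xb, yb, thb = q[0], q[1], q[2]
    T_b_w = np.array([[np.cos(thb), -np.sin(thb), 0.0, xb],
                      [np.sin(thb),  np.cos(thb), 0.0, yb],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    T_m_b = np.array([[1.0, 0.0, 0.0, 0.140],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.151],
                      [0.0, 0.0, 0.0, 1.0]])
    dh_rows = [(0.033, np.pi / 2, 0.147,  q[3]),
               (0.155, 0.0,       0.0,    q[4]),
               (0.135, 0.0,       0.0,    q[5]),
               (0.0,   np.pi / 2, 0.0,    q[6]),
               (0.0,   0.0,       0.2175, q[7])]
    T = T_b_w @ T_m_b
    for a, alpha, d, theta in dh_rows:
        T = T @ dh(a, alpha, d, theta)
    return T

print(youbot_fk(np.zeros(8))[:3, 3])  # end-effector position at the home configuration
```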

The constant matrix transformation $T_e^c$ is defined as

$$T_e^c = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0.05 \\ 0 & 0 & 1 & 0.08 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Model and control parameters were set as: $\lambda = 1.5$, $\beta = 0.2$, $k_0 = 2.6$, $L = 0.2355$ m, $l = 0.15$ m, $m_2 = m_3 = 1.3$ kg and $m_4 + m_5 = 2$ kg. Mass $m_1$ is not required to compute the gravity vector. All parameters were selected experimentally to achieve the best performance.

Simulation results

In simulation tests, four interest points in the image were considered, where the desired image features $s_d$ were set as

$$s_d = [0.1302 \;\; 0.1492 \;\; 0.1302 \;\; 0.1492 \;\; 0.1302 \;\; 0.1113 \;\; 0.1302 \;\; 0.1113]^T$$

Moreover, the intrinsic parameters of the camera were set as $f = 600$, $c_u = 320$, $c_v = 240$. The simulation scenario of the ideal case is given in Fig. 3.

Figure 3: Simulation scenario of the ideal case.

The ideal simulation aims to show that the second task in Eq. (26) is increasing the manipulability measure while performing the IBVS task with high priority.

Figure 4 shows the results when the secondary task is not included. In Fig. 4A, the dashed lines represent the starting positions, while the solid lines indicate the desired positions. The colored dashed lines illustrate the trajectories of each keypoint. Although the keypoints reach the desired pose, Fig. 4B shows that, since the secondary task is not included, the manipulability measure does not exceed 0.3.

Figure 4: Ideal simulations without the manipulability term (secondary task).

(A) The trajectories of the features in pixel units and (B) the manipulability index.

Conversely, in Fig. 5, we present the results of the same simulation, but with the secondary task included. It is evident that the manipulability measure increases, in contrast to the previous experiment (Fig. 4).

Figure 5: Ideal simulations with the manipulability term (secondary task).

(A) The trajectories of the features in pixel units and (B) the manipulability index.

As can be seen in Figs. 4A and 5A, the image point trajectories (colored dashed lines) confirm that the primary task is performed successfully.

In addition, in Fig. 6A we show the transition of the image features, where the dashed lines can be seen converging at the desired positions, represented by solid lines. In Fig. 6B we present the joint velocities of the mobile manipulator while performing both the primary and secondary tasks. It can be observed that the velocities converge smoothly to a neighborhood of zero, indicating that the desired positions have been reached.

Figure 6: Ideal simulation.

(A) Image features and (B) joint velocities.

A singularity test experiment is performed to validate the capacity of the proposed approach to avoid singularities and smooth out discontinuities by inverting the Jacobian matrix with DLS, see Eq. (23). In this experiment, we also compare the proposal against the classical IBVS

$$\dot{q} = -\lambda \left(L_s\, {}^{c}V_w\, J(q)\right)^{+} e$$
where the inversion $A^{+}$ with $A = L_s\, {}^{c}V_w\, J(q)$ was performed in two ways: using the singular value decomposition (SVD) and using the well-known Moore-Penrose pseudoinverse $A^{+} = A^T(AA^T)^{-1}$. SVD in the form $A = U\Sigma V^T$ is another method to deal with singularities. To compute the inverse of $A$, singular values along the diagonal of $\Sigma$ that are less than or equal to a tolerance are treated as zero; then, all the non-zero values of $\Sigma$ can be inverted.
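The two inversion strategies used in this comparison can be sketched as follows; the truncation tolerance and the synthetic nearly rank-deficient matrix are illustrative.

```python
import numpy as np

def svd_pinv(A, tol=1e-3):
    """Pseudoinverse via SVD: singular values <= tol are treated as zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

def moore_penrose_pinv(A):
    """Moore-Penrose pseudoinverse A^T (A A^T)^{-1} (valid for full row rank A)."""
    return A.T @ np.linalg.inv(A @ A.T)

# Truncation switches abruptly when a singular value crosses the tolerance,
# whereas the DLS damping of Eq. (23) degrades the inversion continuously.
A = np.random.randn(6, 8)
A[5] = A[4] + 1e-9 * np.random.randn(8)   # nearly rank-deficient
print(np.linalg.norm(moore_penrose_pinv(A)), np.linalg.norm(svd_pinv(A)))
```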

Figure 7 illustrates the simulation scenario of the singularity test experiment. To identify singularities, Monte Carlo experiments were performed to randomly select numerical joint configurations for which the determinant of $JJ^T$ is close to 0. Then, four interest points in the 3D world were intentionally placed near the end-effector position associated with a singular joint configuration.

Figure 7: Simulation scenario for the singularity test experiment.

Figure 8 presents the joint position results over time for the singularity test experiment. As can be seen in Fig. 8A, the Moore-Penrose pseudoinverse suffers from singularities at around 9 s of simulation time, where abrupt changes in joint position appear. Figure 8B shows that SVD reduces the impact of singularities and all joint positions are admissible. However, Fig. 8C demonstrates that the proposed approach significantly reduces the impact of singularities, since the joint positions are not perturbed at all.

Figure 8: Joint position results of the singularity test experiment.

(A) The joint position results of classical IBVS with Moore-Penrose pseudoinverse, (B) the classical IBVS with SVD to avoid singularities, and (C) the proposed IBVS scheme with DLS.

The joint velocity results of the singularity test experiment are reported in Fig. 9. In the case of the Moore-Penrose pseudoinverse, the presence of singularities provokes inadmissible joint velocities, as shown in Fig. 9A. Although SVD reduces the impact of singularities, discontinuities appear in some joint velocities, which is not acceptable for real-world applications, see Fig. 9B. Moreover, the joint velocities obtained with the proposed approach are smooth, demonstrating that DLS smooths out discontinuities, see Fig. 9C. All velocities converge to a neighborhood of zero with admissible joint positions.

Figure 9: Joint velocity results of the singularity test experiment.

(A) The joint velocity results of classical IBVS with Moore-Penrose pseudoinverse, (B) the classical IBVS with SVD to avoid singularities, and (C) the proposed IBVS scheme with DLS.

The image point trajectories of the singularity test experiment are provided in Fig. 10. As expected, Fig. 10A shows that the IBVS task with the Moore-Penrose pseudoinverse failed. In this case, the image point trajectories present abrupt changes that go out of the image boundary. Moreover, Fig. 10B reports that IBVS with SVD reached the desired image features, despite the velocity discontinuities. Finally, Fig. 10C illustrates that the proposed approach with DLS successfully reached the desired image features.

Figure 10: Image point trajectories of the singularity test experiment.

(A) The trajectories of the features of classical IBVS with Moore-Penrose pseudoinverse, (B) the classical IBVS with SVD to avoid singularities, and (C) the proposed IBVS scheme with DLS.

Now, the proposed approach is tested in the Coppelia simulator. These simulations aim to test the proposed scheme under non-modeled dynamics and gravity compensation. The setup of these simulations is shown in Fig. 11.

Figure 11: Setup in Coppelia simulator.

Photo credit: Javier Gomez-Avila.

From this experiment, it is important to report the error in the image features, as illustrated in Fig. 12. The figure clearly shows an offset in the y direction, which can be attributed to the influence of gravity on the system. This offset indicates that the image features are not aligning perfectly with the expected positions, likely due to gravitational effects acting on the manipulator during the operation.

Figure 12: Image features in Coppelia simulator before gravity compensation.

Figure 13 presents the results of implementing gravity compensation. The data clearly demonstrate that the error in the y direction has been significantly reduced as a result of this compensation technique. This improvement indicates that the gravity compensation has effectively minimized the offset previously observed (Fig. 12), leading to better alignment of the image features with their expected positions.

Figure 13: Image features in Coppelia simulator with gravity compensation.

Real-world experimental results

The experiments are conducted using the Kuka YouBot omnidirectional platform, which has an initial pose as shown in Fig. 14. The QR code is placed within the field of view of this initial position. The platform is equipped with encoders, which are used to measure the joint positions. The encoders are also used for wheel odometry to estimate the pose of the mobile platform.

Figure 14: Experimental setup. Initial (left image) and final (right image) positions.

Photo credit: Jesus Hernandez-Barragan.

Since the surface where the mobile platform moves is not completely flat, external disturbances caused by surface imperfections can produce slippage that causes odometry errors. Even with this lack of accuracy in platform pose estimation, the proposed IBVS succeeds since it does not rely on the internal odometry (Li & Xiong, 2021; Jo & Chwa, 2023) but instead on the visual feedback from the QR code.

For this experiment, we measure four image features from a QR code. Figure 14 shows the starting and ending positions in an example of the experimental setup. This experiment was performed over 30 s. In this scenario, the desired image features $s_d$ were set as

$$s_d = [0.1459 \;\; 0.17507 \;\; 0.0865 \;\; 0.1705 \;\; 0.0929 \;\; 0.0595 \;\; 0.1491 \;\; 0.0611]^T$$

Moreover, the intrinsic parameters of the camera were calibrated as $f = 623.71$, $c_u = 317.97864$, $c_v = 257.93234$.

The joint velocities and positions from the experiment are presented in Fig. 15. Although the control signals are not as smooth as those observed in the simulations, Fig. 15A shows that the velocities converge to a neighborhood of zero. Several challenges arose during the real-world implementation, including unmodeled dynamics, external perturbations, robot wear, and surface imperfections. Despite these challenges, the joint positions are still achieved smoothly, as shown in Fig. 15B.

Figure 15: Joint velocities (A) and joint position (B) results.

Figure 16A presents the image feature transitions and Fig. 16B the image point trajectories. The reported results indicate that the reference position is achieved, even in the presence of noisy image features from QR detection. Additionally, Fig. 16A shows that the offset caused by gravity has been mitigated. In Fig. 16B, the black solid square represents the desired position, while the red square indicates the final position.

Figure 16: Image feature results in pixel units (A) and their trajectories (B).

Finally, Fig. 17 shows that the manipulability measure is increased as expected.

Figure 17: Manipulability.

Discussion

Ideal simulations addressed the task prioritization scheme, where the eye-in-hand IBVS was the highest priority task, and manipulability maximization was the secondary task. Simulation results demonstrate that both tasks performed successfully with smooth image trajectories and control actions while maximizing the manipulability measure. Notice that higher manipulability reduces the incidence of singularities.

A singularity test experiment was performed to demonstrate the singularity avoidance capacities of the proposed approach. In this case, the proposed approach was compared against the classical IBVS, where the inversion of the required matrices was performed with two methods: the Moore-Penrose pseudoinverse and SVD. The reported results demonstrate that IBVS with the Moore-Penrose pseudoinverse performed poorly; the IBVS task practically failed since this method does not handle singularities. IBVS with SVD reduces the impact of singularities, and the IBVS task succeeds. However, the presence of singularities provokes significant joint velocity discontinuities that are impractical in real-world implementations. Finally, the proposed approach avoids singular configurations, since the joint position and joint velocity results are admissible, and it smooths out discontinuities.

Simulations in the Coppelia simulator addressed the importance of gravity compensation. Simulation results without the compensation term show a significant error in the y-axis of each image feature. However, the gravity compensation term effectively reduced this error.

The real-world experiments with the 8-DOF KUKA YouBot address the applicability of the proposed approach. The eye-in-hand IBVS was performed successfully with manipulability maximization and gravity compensation. Although the input control signals of the experiments were not as smooth as in the simulations, the joint position trajectories are still smooth. Notice that the control action tries to compensate for non-modeled dynamics, external perturbations, and other drawbacks of the physical implementation which are not considered in simulated scenarios. Despite these inconveniences, the proposed approach succeeds.

We use kinematic modeling for both the manipulator and the mobile platform to implement IBVS. No dynamic modeling is considered except for the gravity compensation.

Kinematic modeling uncertainties, such as errors in the platform model of Eq. (18) and in the manipulator model (Table 1), may arise since constants are defined based on technical specifications and practical procedures that have error tolerances. Additionally, parametric errors can occur due to intrinsic and extrinsic camera calibration.

Finally, although we do not employ any techniques to address noisy image features (such as a Kalman filter), we still achieve the desired image features.

In this work, the VS task was implemented on an omnidirectional mobile manipulator. We leave as future work the design of a kinematic model that includes mobile platforms with nonholonomic constraints. Moreover, the dynamic model of the mobile manipulator can also be included in the VS scheme. Finally, this work can be extended to the eye-to-hand configuration.

Conclusions

The article presented satisfactory results from implementing eye-in-hand IBVS enhanced with manipulability maximization and gravity compensation. The redundant omnidirectional mobile manipulator KUKA YouBot was considered as a case study. Both simulation and real-world experimentation demonstrated the effectiveness of this approach.

To demonstrate the effectiveness of our proposal and address the challenge of inverting $J(q)$ near singularities, we applied the damped least-squares (DLS) inverse method. In simulation, the mobile manipulator was directed to approach a kinematic singularity. Results show that, with the classical method, velocities escalate significantly as the manipulator moves toward this configuration. In contrast, our proposed approach enables the manipulator to avoid the singular configuration, consistently maintaining velocities within acceptable limits.

Additionally, the secondary task, aimed at increasing the manipulability index, assists in ensuring that once the primary task is completed (reducing the positional error of the QR code in the image), the manipulator moves away from singular configurations. This approach not only helps achieve the primary positioning goal but also enhances the stability and performance of the system by avoiding configurations where control may become unstable.

Supplemental Information

Repository link.

This repository contains the code and resources for “Image Based Visual Servoing with kinematic singularity avoidance for mobile manipulator”. The repository includes MATLAB and CoppeliaSim simulations.

The repository can be found on GitHub:

https://github.com/Jegovila/IBVS_YouBot DOI: 10.5281/zenodo.14035156 (https://doi.org/10.5281/zenodo.14035155).

DOI: 10.7717/peerj-cs.2559/supp-1