
Combining ROS and OpenCV for Image Processing in Python

2022-06-07 14:07:07

I. Installing ROS OpenCV support

Install OpenCV and the ROS vision packages:

sudo apt-get install ros-kinetic-vision-opencv libopencv-dev python-opencv

ROS image processing relies on the OpenCV library. ROS uses a package called CvBridge to convert the images it acquires into OpenCV's format; after OpenCV processes them, the results are passed back to ROS for display or further use. The pipeline is: ROS image message → CvBridge → OpenCV image → processing → CvBridge → ROS image message.

II. A Simple Example

We use the ROS driver to capture camera frames, convert the data ROS receives into the format OpenCV expects via CvBridge, call OpenCV's library routines to process the image (here, drawing a circle on it), and then hand the result back to ROS for display.

1.usb_cam.launch

First, create a launch file that starts the camera driver and acquires image data. Run it with: roslaunch <package_name> usb_cam.launch

<launch>
    <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
      <param name="video_device" value="/dev/video0" />
      <param name="image_width" value="1280" />
      <param name="image_height" value="720" />
      <param name="pixel_format" value="yuyv" />
      <param name="camera_frame_id" value="usb_cam" />
      <param name="io_method" value="mmap"/>
    </node>
</launch>

2.cv_bridge_test.py

Create a Python script (Python 2, which ROS Kinetic uses). It receives the image messages published by ROS, draws a circle on each frame, and publishes the result back to ROS on the topic cv_bridge_image. Run it with: rosrun <package_name> cv_bridge_test.py
If you run into a permission error, make the script executable from its directory: sudo chmod +x *.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class image_converter:
    def __init__(self):
        # Create the cv_bridge and declare the image publisher and subscriber
        self.image_pub = rospy.Publisher("cv_bridge_image", Image, queue_size=1)
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/usb_cam/image_raw", Image, self.callback)

    def callback(self, data):
        # Use cv_bridge to convert the ROS image message into OpenCV format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # Draw a circle on the image as a marker
        (rows,cols,channels) = cv_image.shape
        if cols > 60 and rows > 60 :
            cv2.circle(cv_image, (60, 60), 30, (0,0,255), -1)

        # Display the OpenCV-format image
        cv2.imshow("Image window", cv_image)
        cv2.waitKey(3)

        # Convert the OpenCV image back into a ROS image message and publish it
        try:
            self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))
        except CvBridgeError as e:
            print(e)

if __name__ == '__main__':
    try:
        # Initialize the ROS node
        rospy.init_node("cv_bridge_test")
        rospy.loginfo("Starting cv_bridge_test node")
        image_converter()
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down cv_bridge_test node."
        cv2.destroyAllWindows()

3.rqt_image_view

Run rqt_image_view in a terminal and subscribe to the cv_bridge_image topic; you will see the OpenCV-processed image displayed in ROS.

III. CvBridge API

1.imgmsg_to_cv2()

Converts a ROS image message into OpenCV image data:

# Use cv_bridge to convert the ROS image message into OpenCV format
try:
    cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
except CvBridgeError as e:
    print(e)

2.cv2_to_imgmsg()

Converts OpenCV-format image data into a ROS image message:

# Convert the OpenCV image back into a ROS image message and publish it
try:
    self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))
except CvBridgeError as e:
    print(e)
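For the "bgr8" encoding, the conversion is conceptually just a reinterpretation of the message's flat data buffer as a height × width × 3 array. Below is a rough numpy-only illustration of that idea; it ignores the message's step/stride field and non-trivial encodings, and `imgmsg_to_array` is a hypothetical helper, not part of cv_bridge:

```python
import numpy as np

def imgmsg_to_array(height, width, data):
    # Reinterpret the flat byte buffer of a "bgr8" ROS Image message
    # as an H x W x 3 uint8 array (the shape cv_bridge hands to OpenCV).
    return np.frombuffer(bytes(data), dtype=np.uint8).reshape(height, width, 3)

# A fake 1x2 "bgr8" buffer: one blue pixel followed by one red pixel
raw = [255, 0, 0, 0, 0, 255]
img = imgmsg_to_array(1, 2, raw)
```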

IV. Face Detection with ROS + OpenCV

1.usb_cam.launch

As in the previous example, this launch file starts the camera driver to acquire image data. Run it with: roslaunch <package_name> usb_cam.launch

<launch>
    <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
      <param name="video_device" value="/dev/video0" />
      <param name="image_width" value="1280" />
      <param name="image_height" value="720" />
      <param name="pixel_format" value="yuyv" />
      <param name="camera_frame_id" value="usb_cam" />
      <param name="io_method" value="mmap"/>
    </node>
</launch>

2.face_detector.launch

The face detection uses a cascade classifier based on Haar features. Its accuracy is not great, but the point here is only to demonstrate image processing with ROS and OpenCV, so the algorithm's own quality does not matter. The launch file uses one Python file and two XML cascade files, shown below:

2.1 launch

<launch>
    <node pkg="robot_vision" name="face_detector" type="face_detector.py" output="screen">
        <remap from="input_rgb_image" to="/usb_cam/image_raw" />
        <rosparam>
            haar_scaleFactor: 1.2
            haar_minNeighbors: 2
            haar_minSize: 40
            haar_maxSize: 60
        </rosparam>
        <param name="cascade_1" value="$(find robot_vision)/data/haar_detectors/haarcascade_frontalface_alt.xml" />
        <param name="cascade_2" value="$(find robot_vision)/data/haar_detectors/haarcascade_profileface.xml" />
    </node>
</launch>

2.2 face_detector.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image, RegionOfInterest
from cv_bridge import CvBridge, CvBridgeError

class faceDetector:
    def __init__(self):
        rospy.on_shutdown(self.cleanup)

        # Create the cv_bridge
        self.bridge = CvBridge()
        self.image_pub = rospy.Publisher("cv_bridge_image", Image, queue_size=1)

        # Paths to the Haar cascade XML files, passed in from the launch file
        cascade_1 = rospy.get_param("~cascade_1", "")
        cascade_2 = rospy.get_param("~cascade_2", "")

        # Initialize the Haar feature detectors with the cascade files
        self.cascade_1 = cv2.CascadeClassifier(cascade_1)
        self.cascade_2 = cv2.CascadeClassifier(cascade_2)

        # Cascade parameters to tune face detection; can be overridden in the launch file
        self.haar_scaleFactor  = rospy.get_param("~haar_scaleFactor", 1.2)
        self.haar_minNeighbors = rospy.get_param("~haar_minNeighbors", 2)
        self.haar_minSize      = rospy.get_param("~haar_minSize", 40)
        self.haar_maxSize      = rospy.get_param("~haar_maxSize", 60)
        self.color = (50, 255, 50)

        # Subscriber for rgb image data; the topic name can be remapped in the launch file
        self.image_sub = rospy.Subscriber("input_rgb_image", Image, self.image_callback, queue_size=1)

    def image_callback(self, data):
        # Use cv_bridge to convert the ROS image message into OpenCV format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
            frame = np.array(cv_image, dtype=np.uint8)
        except CvBridgeError as e:
            print(e)
            return

        # Convert to grayscale
        grey_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Equalize the histogram to reduce the influence of lighting
        grey_image = cv2.equalizeHist(grey_image)

        # Attempt face detection
        faces_result = self.detect_face(grey_image)

        # Draw a box around every detected face region
        if len(faces_result) > 0:
            for face in faces_result: 
                x, y, w, h = face
                cv2.rectangle(cv_image, (x, y), (x+w, y+h), self.color, 2)

        # Convert the annotated image back into a ROS message and publish it
        self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))

    def detect_face(self, input_image):
        # First try the frontal-face model
        faces = []
        if not self.cascade_1.empty():
            faces = self.cascade_1.detectMultiScale(input_image,
                    scaleFactor=self.haar_scaleFactor,
                    minNeighbors=self.haar_minNeighbors,
                    flags=cv2.CASCADE_SCALE_IMAGE,
                    minSize=(self.haar_minSize, self.haar_minSize),
                    maxSize=(self.haar_maxSize, self.haar_maxSize))

        # If no frontal face was found, fall back to the profile-face model
        if len(faces) == 0 and not self.cascade_2.empty():
            faces = self.cascade_2.detectMultiScale(input_image,
                    scaleFactor=self.haar_scaleFactor,
                    minNeighbors=self.haar_minNeighbors,
                    flags=cv2.CASCADE_SCALE_IMAGE,
                    minSize=(self.haar_minSize, self.haar_minSize),
                    maxSize=(self.haar_maxSize, self.haar_maxSize))

        return faces

    def cleanup(self):
        print "Shutting down vision node."
        cv2.destroyAllWindows()

if __name__ == '__main__':
    try:
        # Initialize the ROS node
        rospy.init_node("face_detector")
        faceDetector()
        rospy.loginfo("Face detector is started..")
        rospy.loginfo("Please subscribe the ROS image.")
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down face detector node."
        cv2.destroyAllWindows()
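The frontal-then-profile fallback in detect_face above is an instance of a general pattern: try detectors in order and keep the first non-empty result. A minimal OpenCV-free sketch of that logic (the detector callables here are stand-ins, not real cascade classifiers):

```python
def detect_with_fallback(detectors, image):
    # Run each detector in turn; return the first non-empty result.
    for detect in detectors:
        hits = detect(image)
        if len(hits) > 0:
            return hits
    return []

# Stand-in detectors: the "frontal" one finds nothing, the "profile" one succeeds
frontal = lambda img: []
profile = lambda img: [(10, 10, 40, 40)]
```

This keeps the per-detector parameters out of the control flow, which is why the two detectMultiScale calls above can stay identical apart from the cascade used.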

2.3 The two XML cascade files

Link

3.rqt_image_view

After running both launch files, run rqt_image_view in a terminal and subscribe to the cv_bridge_image topic; the OpenCV-processed image is displayed in ROS.

V. Frame-Difference Object Tracking with ROS + OpenCV

1.usb_cam.launch

As in the previous two examples, this launch file starts the camera driver to acquire image data. Run it with: roslaunch <package_name> usb_cam.launch

<launch>
    <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
      <param name="video_device" value="/dev/video0" />
      <param name="image_width" value="1280" />
      <param name="image_height" value="720" />
      <param name="pixel_format" value="yuyv" />
      <param name="camera_frame_id" value="usb_cam" />
      <param name="io_method" value="mmap"/>
    </node>
</launch>

2.motion_detector.launch

Object tracking here uses simple frame differencing, which is not very robust; again, the point is to demonstrate image processing with ROS and OpenCV, not the algorithm itself. The launch file runs a single Python file:
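The core of frame differencing is an absolute per-pixel difference against a reference frame followed by a threshold, which motion_detector.py does with cv2.absdiff and cv2.threshold. A numpy-only sketch of that step, treating grayscale frames as 2-D uint8 arrays (blurring, dilation, and contour extraction are omitted):

```python
import numpy as np

def motion_mask(first_frame, frame, threshold=25):
    # Absolute per-pixel difference against the reference frame,
    # then binarize: 255 where the change exceeds the threshold.
    delta = np.abs(frame.astype(np.int16) - first_frame.astype(np.int16))
    return np.where(delta > threshold, 255, 0).astype(np.uint8)

first = np.zeros((4, 4), dtype=np.uint8)    # empty reference frame
moved = first.copy()
moved[1:3, 1:3] = 200                       # a bright "object" appears
mask = motion_mask(first, moved)
```

The cast to int16 before subtracting avoids the uint8 wraparound that a naive `frame - first_frame` would produce.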

2.1 launch

<launch>
    <node pkg="robot_vision" name="motion_detector" type="motion_detector.py" output="screen">
        <remap from="input_rgb_image" to="/usb_cam/image_raw" />
        <rosparam>
            minArea: 500
            threshold: 25
        </rosparam>
    </node>
</launch>

2.2 motion_detector.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image, RegionOfInterest
from cv_bridge import CvBridge, CvBridgeError

class motionDetector:
    def __init__(self):
        rospy.on_shutdown(self.cleanup)

        # Create the cv_bridge
        self.bridge = CvBridge()
        self.image_pub = rospy.Publisher("cv_bridge_image", Image, queue_size=1)

        # Parameters: minimum region area and difference threshold
        self.minArea   = rospy.get_param("~minArea",   500)
        self.threshold = rospy.get_param("~threshold", 25)

        self.firstFrame = None
        self.text = "Unoccupied"

        # Subscriber for rgb image data; the topic name can be remapped in the launch file
        self.image_sub = rospy.Subscriber("input_rgb_image", Image, self.image_callback, queue_size=1)

    def image_callback(self, data):
        # Use cv_bridge to convert the ROS image message into OpenCV format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
            frame = np.array(cv_image, dtype=np.uint8)
        except CvBridgeError as e:
            print(e)
            return

        # Convert to grayscale and blur to suppress noise
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)

        # Compare the current frame against the reference frame to find moving regions
        if self.firstFrame is None:
            self.firstFrame = gray
            return  
        frameDelta = cv2.absdiff(self.firstFrame, gray)
        thresh = cv2.threshold(frameDelta, self.threshold, 255, cv2.THRESH_BINARY)[1]

        thresh = cv2.dilate(thresh, None, iterations=2)
        binary, cnts, hierarchy= cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        for c in cnts:
            # Ignore regions smaller than the configured minimum area
            if cv2.contourArea(c) < self.minArea:
               continue 

            # Draw a box around each detected object
            (x, y, w, h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (50, 255, 50), 2)
            self.text = "Occupied"

        # Overlay the current status on the output frame
        cv2.putText(frame, "Status: {}".format(self.text), (10, 20),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

        # Convert the annotated image back into a ROS message and publish it
        self.image_pub.publish(self.bridge.cv2_to_imgmsg(frame, "bgr8"))

    def cleanup(self):
        print "Shutting down vision node."
        cv2.destroyAllWindows()

if __name__ == '__main__':
    try:
        # Initialize the ROS node
        rospy.init_node("motion_detector")
        rospy.loginfo("motion_detector node is started...")
        rospy.loginfo("Please subscribe the ROS image.")
        motionDetector()
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down motion detector node."
        cv2.destroyAllWindows()
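One portability note: the three-value unpacking of cv2.findContours in the callback above matches OpenCV 3.x, but OpenCV 2.4 and 4.x return only (contours, hierarchy). A small version-agnostic helper, written here against a plain tuple so it can be shown without OpenCV installed:

```python
def contours_from(result):
    # cv2.findContours returns (image, contours, hierarchy) on OpenCV 3.x
    # and (contours, hierarchy) on 2.4/4.x; the contour list is always
    # the second-to-last element of the returned tuple.
    return result[-2]
```

With it, the unpacking above could become `cnts = contours_from(cv2.findContours(...))` regardless of the installed OpenCV version.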

3.rqt_image_view

After running both launch files, run rqt_image_view in a terminal and subscribe to the cv_bridge_image topic; the OpenCV-processed image is displayed in ROS. (Given my poor test environment and the limited accuracy of this algorithm, I won't show the final result here.)
