
November 30, 2020 (Monday)

Sensor Fusion (15)

Object recognition by camera is basically a passive method. However, even when a camera is used, a method that combines it with light projection is an active method.

The simplest active camera method would be to project a strong light, like a flash, and perform object recognition by comparing the illuminated image against a non-illuminated one.

If the projected beam is narrowed so that a point can be projected into the environment, the distance to that projected point can be measured by the principle of triangulation. Extending this idea, a light pattern can be projected to increase the number of ranging points, or surface information of the target object, rather than a single point, can be obtained in a single exposure. When narrowing the beam or projecting a pattern, laser light is usually chosen because it can be focused more tightly than ordinary light. Even when laser light is used, however, these methods are normally kept clearly distinct from LiDAR.
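As a minimal sketch of the triangulation geometry, suppose the laser beam runs parallel to the camera's optical axis at a known baseline; the projected spot's offset in the image then shrinks in inverse proportion to distance (Z = f·b/x). The function name and parameters below are illustrative, not taken from any particular product:

```python
def triangulate_distance(baseline_m, focal_px, spot_offset_px):
    """Laser-camera triangulation with the beam parallel to the optical
    axis: the spot appears offset by x = f*b/Z pixels, so Z = f*b/x."""
    if spot_offset_px <= 0:
        raise ValueError("spot must be offset from the optical axis")
    return focal_px * baseline_m / spot_offset_px
```

With a 10 cm baseline and a 1000 px focal length, a 50 px spot offset corresponds to a 2 m range; halving the offset doubles the range, which is why accuracy degrades with distance.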

In active methods that depend on light projection, even though a camera is used, the detectable objects are limited just as with radar and LiDAR: an object surface that does not reflect the projected light cannot be detected.

 


November 29, 2020 (Sunday)

Sensor Fusion (14)

While radar and LiDAR transmit and receive electromagnetic waves, the ultrasonic sensor is an active sensor that transmits and receives sound waves. Its ranging performance is therefore far shorter than that of the electromagnetic-wave sensors.

Whereas radar and LiDAR range from 100 to 250 m, ultrasonic sensors reach only a few meters. Their use is therefore limited to monitoring the vehicle's immediate surroundings when parking.
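Ultrasonic ranging itself is simple time-of-flight arithmetic: sound travels at roughly 343 m/s in air, so a round-trip echo delay converts to one-way distance as c·t/2. A minimal sketch, with illustrative names:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at about 20 degrees C

def ultrasonic_range_m(echo_delay_s):
    """Convert a round-trip echo delay into one-way distance."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0
```

A 20 ms echo delay already corresponds to about 3.4 m, near the practical limit of these sensors.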

In addition, wind noise interferes while driving, so they can only be used at low speed, which again makes parking assistance the suitable application. The greatest merit of the ultrasonic sensor is its low cost, so several can be used as needed: for example, at the vehicle's four corners and at the front, rear, left, and right, with the count matched to the area to be monitored. Since the area an ultrasonic sensor covers is limited to the vehicle's immediate surroundings, sensor fusion here usually means combining ultrasonic sensors with one another.

That said, the back-up camera used for reversing shares its monitoring area and purpose with the rear-obstacle ultrasonic sensors, so fusing these sensors is possible. Similarly, fusion of an around-view monitor with the surround-monitoring ultrasonic sensors is conceivable.

 


November 28, 2020 (Saturday)

Sensor Fusion (13)

In the case of LiDAR, the transmitted wave is laser light, so the received reflections come from nearly the same objects a human sees. LiDAR lasers operate around the 900 nm band, which strictly speaking lies outside the human visible range, but the reflection characteristics are not very different.

Because LiDAR emits a narrow, sharp transmit beam, its spatial resolution is high, and it provides a point cloud that a human can understand intuitively just by looking at the reflection data. This is close to the way camera footage, captured in the visible range, is easy to understand at a glance.

The ability to generate a point cloud with high spatial resolution is LiDAR's greatest advantage, and exploiting this property is why LiDAR is used in highly automated driving. That is, LiDAR converts the surroundings of the vehicle into a high-resolution point cloud, and that point cloud can be used for ego-localization. Localization generally relies on GPS, but GPS is hard to use in urban canyons with severe multipath or inside tunnels, so being able to localize with the vehicle's own sensors is extremely valuable.

Precisely because its spatial resolution is high, LiDAR is affected by rain and snow, and that is its weakness. This is one reason radar remains dominant in driver-assistance systems.

 


November 27, 2020 (Friday)

Sensor Fusion (12)

What an active sensor detects depends on the characteristics of the electromagnetic or sound waves it emits. In other words, only things that reflect waves of that wavelength are detection targets.

For radar, reflectivity depends on the permittivity of the target. Metals, with high permittivity, return strong reflections and are easy to detect; non-metals have lower permittivity, reflect weakly, and are harder to detect.

Also, because radio waves have a longer wavelength than light, a wave that light would reflect from may instead diffract around an object. At 78 GHz we are in the millimeter-wave band, so millimeter-sized objects do not reflect. In general, radar has worse spatial resolution than LiDAR, but this is not only because of the longer wavelength. LiDAR's spatial resolution is high because its transmit beam can be focused into a narrow, sharp spot: a presentation laser pointer can still place its beam on a pinpoint at some distance. A radar transmit antenna, by contrast, measures roughly 10 cm square, so however well the transmit wave is focused, it starts at that antenna size.
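The band names map to wavelength via λ = c/f. A quick check, with illustrative names, shows why 78 GHz counts as millimeter wave and how far it is from LiDAR's near-infrared light:

```python
C_M_S = 299_792_458.0  # speed of light in vacuum [m/s]

def wavelength_m(freq_hz):
    """Wavelength of an electromagnetic wave: lambda = c / f."""
    return C_M_S / freq_hz

radar_wl = wavelength_m(78e9)  # about 3.8 mm: the millimeter-wave band
lidar_wl = 905e-9              # a typical 900 nm band LiDAR wavelength
```

The radar wavelength is more than four thousand times the LiDAR wavelength, which is the physical root of the diffraction behavior described above.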

How tightly the transmit beam can be focused is thus one characteristic of an active sensor. A wide beam is not necessarily bad: if a strong reflection comes back from a wide beam, that is firm evidence a target object really is there.

 


November 26, 2020 (Thursday)

Sensor Fusion (11)

All human sense organs are passive sensors. A passive sensor senses by receiving information that the sensed object itself emits.

Admittedly, moving closer to see something better, or shifting where you touch to confirm a tactile impression, could almost be called active sensing. But humans never emit light or ultrasound and sense from the reflected waves.

In contrast to the passive sensor, an active sensor senses by receiving reflections of the electromagnetic or sound waves that the sensor itself emits. The bat, famous for emitting ultrasound to catch insects, is a good example of active sensing. Among the sensors used in automated driving (camera, LiDAR, radar, ultrasonic sensor), the camera is passive while LiDAR, radar, and the ultrasonic sensor are active.

Among living creatures, humans included, external sensing is overwhelmingly passive, because sensing actively would reveal one's own position to predators.

 


November 25, 2020 (Wednesday)

Sensor Fusion (10)

When humans recognize things, they recognize them as perceptual representations. Sushi seen as visual information, sushi smelled as olfactory information, sushi tasted as gustatory information: these pieces of information come together into a single perceptual representation by which sushi is recognized.

Information fused from many sensors can provide richer, more multifaceted information than any single sensor's data. A perceptual representation can therefore be recognized from multiple sensors, or from a single one.

Human sensor-data fusion is called sensory integration, and Albus proposed a hierarchical sensory-information-processing model of how the human brain integrates the senses. In it, visual, auditory, olfactory, gustatory, and tactile information each form modules, networked hierarchically, that together form perceptual representations. Albus's model is conceptual, so it is difficult to adopt directly as a sensor-fusion model. An engineering model of sensor fusion, however, would do well to aim at making Albus's model concrete.

This means going beyond merely integrating sensor data: taking a cue from the rich information that human perceptual representations provide, we should aim to acquire new information that cannot be obtained from the mere combination of sensor data. In other words, let us develop sensor fusion that, like the stereo camera, combines 2D images to obtain 3D information.

 


November 24, 2020 (Tuesday)

Sensor Fusion (9)

Is there any problem when a human recognizes something from a single kind of data? If you look at a photograph of sushi and recognize it as sushi, the recognition will probably feel lacking in reality.

Because the recognition is visual only, taste, touch, and smell are missing, and so the sense of reality is missing. This seems a minor issue, but depending on the object of recognition, problems arise.

What the author finds problematic is the online meeting, such as a video conference. Online meetings lack reality and so are hard to remember, and concentrating in an online class is more tiring than attending in person. The reason is that meetings and classes are a form of communication, normally carried out with all five senses mobilized, but here conducted with vision and hearing alone. Communication draws not only on sight and sound but also on the atmosphere of the room to raise both the sense of reality and the precision of the information. On top of that, the low resolution of the monitor and the low quality of the audio fail to engage even vision and hearing fully. Hence the lack of reality, and the fatigue that comes with concentration.

In short, what has been memorized through sensor-data fusion should be recognized using all the sensors that were used. When not all of those sensors are available, it is better to regard the recognition as incomplete.

 


November 23, 2020 (Monday)

Sensor Fusion (8)

Sensor-data fusion is nothing special, because it is something we humans do all the time.

The human sensor organs are the eyes, ears, nose, tongue, and skin, providing vision, hearing, smell, taste, and touch (including the sense of temperature). We are constantly fusing the data of these five senses.

For example, faced with a piece of nigiri sushi topped with fatty tuna, we Japanese will recognize it as "toro nigiri" from the red-and-white visual impression. At the same time we recall the taste of the toro, the texture, weight, and temperature of holding it, and its smell; some people will even call to mind the spirited greeting of the sushi chef. In other words, the memory of sushi is stored as a fusion of visual, gustatory, tactile, olfactory, and, for some people, auditory data, and at recognition time the input is compared against that fused data. Sushi aside, most things are memorized not as single-sensor data but as a fusion of several sensors' data, and recognition likewise proceeds in fused form.

Of course, recognition from a single kind of data is also possible. Look at a photograph of sushi and you can naturally recognize it as sushi by vision alone.

 


November 22, 2020 (Sunday)

Sensor Fusion (7)

For radar on its own, there are no examples of sensor-data fusion that use reflection intensity the way LiDAR does. With radar, it is common to use the relative velocity of the reflection points instead.

Radar extracts the distance of a reflection point by the FMCW (frequency-modulated continuous wave) method. FMCW uses a chirp as the transmit wave: a signal whose frequency changes over time.

A rising sweep is called chirp-up and a falling one chirp-down, and the radar continuously transmits alternating chirp-up and chirp-down sweeps. Since the transmitted frequency changes with time, the received echo is also a chirp. The beat signal formed between the transmitted and received waves is analyzed to compute distance, but the reflection point is also subject to Doppler shift. Consequently, not only the distance but also the relative velocity can be computed.
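The up/down chirp arithmetic can be sketched as follows. Under the usual triangular-FMCW convention for a closing target, the beat frequency is the range term minus the Doppler term on the up-chirp and their sum on the down-chirp, so averaging and differencing the two beats separates distance from relative speed. This is a textbook sketch with illustrative names, not any particular radar's signal chain:

```python
C = 299_792_458.0  # speed of light [m/s]

def fmcw_range_velocity(f_beat_up, f_beat_down, sweep_slope_hz_s, carrier_hz):
    """Triangular FMCW: recover range and closing speed from the two beats.
    Range term:   f_r = 2*R*S/c  (S = chirp slope in Hz/s)
    Doppler term: f_d = 2*v*f0/c (f0 = carrier frequency)
    Up-chirp beat = f_r - f_d, down-chirp beat = f_r + f_d."""
    f_r = (f_beat_up + f_beat_down) / 2.0   # average cancels the Doppler term
    f_d = (f_beat_down - f_beat_up) / 2.0   # half-difference isolates Doppler
    rng = C * f_r / (2.0 * sweep_slope_hz_s)
    vel = C * f_d / (2.0 * carrier_hz)
    return rng, vel
```

Because range and velocity drop out of the same pair of measurements, the "fusion" of distance and relative speed is essentially free, which is the point made above.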

In other words, radar can output the distance and the relative velocity of each reflection point simultaneously, and these data can easily be fused. It is clear that this property is an excellent match for ACC, which controls the vehicle to follow the car ahead.

 


November 21, 2020 (Saturday)

Sensor Fusion (6)


An example of sensor-data fusion with LiDAR alone is lane recognition. Some LiDARs project fan beams split into several layers in the vertical direction.

Suppose the lowest of these fan-beam layers is projected onto the ground. The reflection intensity from the white lines at the edges of the road then turns out to be far stronger than that from the road surface.

Road-marking paint contains glass beads and similar materials to make it highly reflective, so that the lines stand out crisply when lit by headlamps at night. A material that reflects the visible light of headlamps well also reflects the near-infrared light used by LiDAR well, and therefore returns a far stronger reflection than the surrounding asphalt. Hence, by fusing reflection intensity with the 3D point-cloud data, the white-line regions can be detected, making lane recognition possible with LiDAR.
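A minimal sketch of that fusion step: keep only the ground-layer returns whose intensity clears a paint-level threshold. The 0.6 threshold and the data layout are illustrative; a real system would calibrate intensity against range and incidence angle:

```python
def lane_points(points, intensity_threshold=0.6):
    """points: iterable of (x, y, z, intensity) ground-layer returns.
    Paint with embedded glass beads reflects near-infrared far more
    strongly than asphalt, so a simple threshold separates the two."""
    return [(x, y, z) for x, y, z, i in points if i >= intensity_threshold]
```

Fitting lines or splines to the surviving points then yields the lane geometry in the LiDAR's own 3D frame.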

Radar's transmit beam is not sharp, so lane recognition of the LiDAR kind is difficult. Radio reflectivity is governed by the reflector's permittivity, and metal surfaces give high reflection intensity, so radar can at least detect whether something is a metal surface such as a car or a guardrail.

 


November 20, 2020 (Friday)

Sensor Fusion (5)

Whether the combined sensors are the same or different, what is fused is in each case their data. Strictly speaking, then, sensor fusion is sensor-data fusion.

Considering sensor combinations from the standpoint of sensor-data fusion, it becomes clear that fusion is possible even with a single sensor.

That is, if different kinds of data can be extracted from one sensor, using those data amounts to sensor fusion. The prominent example is the monocular camera. Applying various image-processing operations to camera images yields a variety of data. Recognize objects with a model trained by deep learning, extract optical flow from the video stream, and fuse the two, and you know which way a given object is moving.
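A toy sketch of that single-sensor fusion: given a detector's bounding box and a sparse optical-flow field, averaging the flow vectors inside the box estimates the detected object's direction of motion. The data layout here is illustrative:

```python
def object_motion(box, flow):
    """box: (x0, y0, x1, y1) from an object detector.
    flow: dict mapping pixel (x, y) -> flow vector (dx, dy).
    Average the flow inside the box to estimate object motion."""
    x0, y0, x1, y1 = box
    inside = [(dx, dy) for (x, y), (dx, dy) in flow.items()
              if x0 <= x <= x1 and y0 <= y <= y1]
    if not inside:
        return (0.0, 0.0)  # no flow samples fell inside the detection
    n = len(inside)
    return (sum(dx for dx, _ in inside) / n,
            sum(dy for _, dy in inside) / n)
```

The detection supplies "what and where", the flow supplies "which way": neither datum alone answers the question the fused result does.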

With LiDAR and radar too, the sensor yields not only the distance of each reflection point but also the reflection intensity of the light or radio wave. With prior knowledge of reflectivities, the attributes of each reflection point can then be estimated.

 


November 19, 2020 (Thursday)

Sensor Fusion (4)

Using several different sensors is even more complicated than using several of the same sensor, because data from sensors with completely different characteristics cannot even be compared for differences in the first place.

For example, the image data from a monocular camera is a collection of per-pixel brightness values, and the point-cloud distance data from a LiDAR is not something it can simply be compared with. The camera's data is 2D; the LiDAR's data is 3D.

First, it must be made clear where each sensor is looking. Since different sensors are usually mounted at different positions, their coordinate frames must be calibrated and unified. Once the frames are unified, the two data sets can be regarded as attribute data of different dimensionality for the same object, and it becomes meaningful to consider why and how to fuse them.
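Coordinate unification can be sketched as projecting LiDAR points into the camera image using the extrinsics (R, t) obtained from calibration and the camera's pinhole intrinsics. This is a bare-bones illustration that ignores lens distortion; all names are illustrative:

```python
def lidar_to_image(points_lidar, R, t, fx, fy, cx, cy):
    """Project LiDAR points into the camera image.
    R: 3x3 rotation (list of rows) and t: translation, LiDAR -> camera.
    fx, fy, cx, cy: pinhole intrinsics.  Returns (u, v) pixel coordinates
    for the points that land in front of the camera."""
    pixels = []
    for p in points_lidar:
        # rigid transform into the camera frame
        cam = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
        if cam[2] <= 0:
            continue  # behind the image plane: not visible
        pixels.append((fx * cam[0] / cam[2] + cx,
                       fy * cam[1] / cam[2] + cy))
    return pixels
```

Once each LiDAR point has a pixel address, the 2D brightness and the 3D distance really are attributes of the same object, which is exactly the precondition the paragraph above describes.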

For multiple different sensors, therefore, the most important items are coordinate calibration and coordinate unification. Of course, coordinate calibration also matters when using multiple identical sensors, and for the stereo camera it is the most important item of all.

 


November 18, 2020 (Wednesday)

Sensor Fusion (3)

When using several identical sensors, the basic approach is to compare the differences between the data each sensor produces. Consider, for example, the stereo camera, which uses two identical monocular cameras.

A stereo camera estimates the distance of an object visible to both monocular cameras by the principle of triangulation. The concrete estimation method will be described in detail later; here is a brief outline.

Place the two cameras side by side horizontally with their optical axes parallel. The position at which an object appears in the left and right images then differs according to its distance from the cameras, so once those positions are determined they can be converted into distance. When the object is far away, it appears at nearly the same position in both images. As it approaches, however, its image shifts further toward the right in the left camera and further toward the left in the right camera. In short, the positional difference of the object between the left and right images is extracted and converted into distance.
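The conversion itself is one line: with rectified, parallel-axis cameras the disparity is d = x_left − x_right, and depth is Z = f·B/d. The names and numbers below are illustrative:

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Rectified stereo: disparity d = x_left - x_right, depth Z = f*B/d."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("object at infinity or a bad correspondence")
    return focal_px * baseline_m / disparity
```

With a 700 px focal length and a 12 cm baseline, a 10 px disparity corresponds to 8.4 m; the inverse relation means depth resolution degrades quadratically with distance.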

Although we speak of where the object appears in the image, without image recognition we cannot know where in the image data the object actually lies. In practice, therefore, the computation takes one side as the reference and searches for where a given region of that image appears in the other.

 


November 17, 2020 (Tuesday)

Sensor Fusion (2)

Even without combining a camera and LiDAR, a stereo camera alone can separate objects, recognize their attributes, and obtain distance information. With a stereo camera, then, is sensor fusion unnecessary?

In fact, the stereo camera is itself sensor fusion, because it fuses two monocular cameras.

The stereo camera is sensor fusion using multiple identical sensors. The camera-LiDAR combination is sensor fusion using multiple different sensors. At the hardware level, then, sensor fusion comes in two kinds:

・using multiple identical sensors
・using multiple different sensors

Between these two, everything from the philosophy of sensor fusion down to the methods differs completely.

 


November 16, 2020 (Monday)

Sensor Fusion (1)

From today, let us spend some time thinking about sensor fusion. Simply put, sensor fusion means using multiple sensors to improve the total performance.

The sensors in question are, of course, the external-environment sensors for automated driving that we labor over: onboard cameras, LiDAR, radar, ultrasonic sensors, and other sensors for external recognition and distance measurement.

Why must we consider using multiple sensors, as sensor fusion does? Because a single sensor does not perform well enough. What automated driving expects of external sensing is the separation of the environment into individual objects with their attributes, plus 3D information of the visible area. With LiDAR, which is good enough even for SLAM, the 3D information of the visible area seems attainable. However, object separation and attributes (the color of a traffic light, say) are difficult. Cameras must therefore be used for object separation and attribute recognition, compensating for what LiDAR lacks.

This is one example of sensor fusion: recognize objects and their attributes with the camera, obtain their distances with LiDAR, and let the two complement each other.

 


November 15, 2020 (Sunday)

Evaluation of automated driving communication tool by driving simulator (2)


Marelli is researching and developing a system that controls the light distribution of the headlamps and projects marks onto the road, called a digital light processing unit. The headlamp can not only finely control the light distribution according to driving conditions but also project marks warning the driver onto the road surface. To control the distribution precisely, the light of three LEDs is shaped by 1.3 million micromirrors, whose angles are finely controlled to create the light distribution pattern. Because the distribution is so fine, glare for oncoming cars and pedestrians is minimized compared with conventional headlamps. Naturally, an external recognition sensor detects other vehicles and pedestrians, and the distribution is adjusted as close to optimal as possible using navigation information and the vehicle's position. A guideline is projected where the road narrows, and a mark indicating the direction of a pedestrian can also be displayed. The marks projected onto the road should be recognizable by pedestrians as well, though perhaps only at night. Could the digital light processing unit, then, become a new communication tool between vehicles and pedestrians?

Marelli's digital light processing unit is not the first to mark the road. It is still fresh in our memory that road projection appeared in 2015 in the Mercedes-Benz concept car F015, a Level 5 fully automated driving vehicle with the tagline "Luxury in Motion", a phrase expressing an interior in which, since there is no need to drive, the front seats can rotate backward to face the rear seats. To communicate with pedestrians, the F015 carried LED matrix display units on the front and rear of the vehicle to show messages. When a pedestrian is present, a zebra pattern representing a pedestrian crossing is drawn on the road with a visible-light laser, inviting the pedestrian to cross; and when the pedestrian waves a hand, the vehicle understands the gesture, draws text, and communicates back. Mercedes-Benz hired a social psychologist to research and develop this system for communicating with pedestrians.

It goes without saying that the driving simulator can be utilized for the evaluation of the communication tools described here.

 


November 14, 2020 (Saturday)

Evaluation of automated driving communication method by driving simulator (1)

Next, let's look at the issue of mixed manual and automated driving. What is the problem with mixing the two? Consider how you communicate with other drivers during normal driving: the blinker when turning at an intersection or changing lanes, flashing the headlamps when you want to overtake, the thank-you hazard flash, the horn, and so on. These are also implemented in automated driving vehicles and pose no particular problem. What becomes a problem is communication that does not use the blinker, hazard lamps, headlamp flash, or horn; that is, how an automated driving vehicle can convey what a human driver would convey to other traffic with gestures and facial expressions. Level 5 automated driving vehicles have no driver, and perhaps all their windows will be tinted so that no one can see inside. Under these circumstances, if an automated driving vehicle begins to slow in front of a pedestrian who wants to cross the road, the pedestrian cannot tell whether the vehicle is inviting them to cross or merely slowing down for some other reason.

In the past, large trucks in Japan were equipped with speed indicator lamps to inform pedestrians of the truck's condition, if not exactly as a tool for vehicle-pedestrian communication. This was a triplet of green lamps mounted above the windshield, and the number of lamps lit varied with speed: all off when the truck was stopped, one lit automatically at speeds up to 40 km/h, two from 40 to 60 km/h, and three above 60 km/h. Pedestrians could judge the speed of an approaching truck from a distance by counting its lit indicator lamps, so it served as a primitive form of vehicle-pedestrian communication. However, it was abolished in 1999 on the grounds that it constituted an import barrier. The speed indicator was unique to Japan, a casualty of the automobile trade wars: imported trucks could be approved if modified to carry the lamps, but the cost of modification made the requirement count as an import barrier. In addition, some pedestrians without a driver's license did not even know the meaning, or the existence, of the lamps, and the fact that they did not function well was a contributing cause of the abolition.
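The lamp logic was simple enough to state as a function. A sketch follows; exactly how the boundary speeds (40 and 60 km/h) mapped to lamp counts at the boundaries themselves is an assumption here:

```python
def speed_indicator_lamps(speed_kmh):
    """Japanese heavy-truck speed indicator (abolished in 1999):
    0 lamps stopped, 1 lamp up to 40 km/h, 2 lamps up to 60 km/h,
    3 lamps above 60 km/h.  Boundary handling is assumed."""
    if speed_kmh <= 0:
        return 0
    if speed_kmh <= 40:
        return 1
    if speed_kmh <= 60:
        return 2
    return 3
```

The appeal for automated driving is exactly this property: a state readable at a glance, from a distance, with no text to parse.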

Such communication tools for pedestrians and other drivers are necessary for driverless automated driving vehicles. At the very least, something is needed to show that the vehicle is an automated driving vehicle, so that it can be recognized correctly by other traffic. As a means of communicating with pedestrians and other vehicles beyond the blinker and headlamp flash, an electric message board facing outward from the vehicle is one possibility. On-board message boards are already common on public-service vehicles for traffic regulation and construction guidance, and they are sold commercially as well, displaying scrolling messages such as "Thank you." and "After you.". However, under the current Road Traffic Law, care is needed with any illumination other than the prescribed lights, and strictly speaking, lighting such a board while driving may violate the law in some countries. Inexpensive models arrange LEDs in a matrix, store several patterns of dot-matrix characters, and display a message to suit the situation. Adding something like the speed indicator lamps to this, with a message for every situation, could form a tool for communicating with the world outside the automated driving vehicle. However, reading a message takes longer than seeing a blinker or a headlamp flash, and a message board cannot be identified from a distance. A means of communication that can be understood instantly is therefore necessary.

Communication between vehicle and pedestrian is done by eye contact and gesture, and this kind of non-verbal communication is faster and more reliable than messages. What kind of system would let an automated driving vehicle do the same? For an external display, a large display conveying pictograms and personified pictures in an easy-to-understand way may be better than a text display. Alternatively, a number of lighting devices could be set on the vehicle so as to be visible through 360 degrees and from a distance, like the speed indicator lamps once used on heavy trucks in Japan, with the lighting pattern changed according to the state of automated driving. This would require worldwide standardization and regulation, but it has no language barrier and even children could understand it. Mercedes-Benz is in fact researching and developing a way for automated driving vehicles and pedestrians to communicate. In a concept vehicle called the Cooperative Car, a lighting system under development indicates to approaching pedestrians that the vehicle is operating automatically. Lights positioned on the windshield, front grille, headlamps, mirrors, and the lower part of the side windows, arranged to be visible from any angle through 360 degrees, indicate that the vehicle is in automated driving mode, while a lamp on the roof indicates the vehicle's movement (slow flashing indicates deceleration, fast flashing indicates approaching pedestrians, and so on). Mercedes says these express the vehicle's own intentions and can communicate them to pedestrians.
The roof lamp emits turquoise blue, a color found to give pedestrians a sense of security after various colors were examined. Flashes of red or yellow, like warning lights, could make pedestrians uneasy and hinder the spread of automated driving, so a color that provides a sense of security is a well-considered choice.

 


November 13, 2020 (Friday)

Evaluation of automated driving ethics by driving simulator (3)

We have seen that it is possible to conceive a program that expresses ethics. However, the program's initial values and constants must be determined experimentally. The best way to experiment is to collect driving data on the driving simulator with the help of people who have a typical sense of ethics. In this method, a trolley-problem situation is created on the driving simulator and drivers are asked to decide whether to go straight or change lanes.

It is a good idea to run two patterns: manual driving with failed brakes, and riding in the automated driving vehicle. Also, independently of the driving-simulator experiment, a questionnaire that poses the situation alone, like the Moral Machine, should be taken to check whether the tendency matches the simulator results. In real situations there is no time to think carefully, and whether immediate judgment differs from considered judgment is itself important data.

Also, in the case of a learning program, learning should start not only from extreme situations such as the trolley problem but also from normal use. Everything from the vehicle's own compliance with ordinary traffic rules to the behavior of other vehicles, that is, the state of the environment the vehicle normally travels in, should also be used for learning the program's parameters.

Level 5 automated driving vehicles have no steering wheel or pedals, because drivers are not supposed to drive them. This means the driver's ethics cannot be learned during normal driving. For this reason, rather than jumping straight to Level 5, it would be better to start from Level 1, where the driver actively controls the vehicle, learn the driver's ethics during normal Level 1 driving, and raise the level of automation gradually. Of course, there is also the idea that a Level 5 vehicle simply inherits the values learned during its development. However, ethics can vary slightly with the user and the area of use, so a mechanism that keeps learning is preferable. Here, too, the driving simulator can be put to use.

Thus, an automated driving vehicle not controlled by a driver should have a mechanism for monitoring, during normal use, how well the ethics of its occupants are satisfied. This means a system is needed that detects changes in the occupants' physiological indicators without contact.

 


November 12, 2020 (Thursday)

Evaluation of automated driving ethics by driving simulator (2)

Now let us consider how to program automated driving ethics. Drivers should have a clear sense of ethics, and automated driving ethics should be no different. The simplest approach is to encode the choice rules as production rules and select logically with if-then-else. Changing the production-rule database by destination market would allow a variety of specifications. However, this approach requires every confrontation situation to be embedded in the database, while actual situations may be ambiguous or unexpected. As a result, the automated driving decision quickly breaks down in situations that if-then-else cannot resolve. To deal with situations that do not fit the database or were not anticipated, automated driving vehicles will need to learn about the real world; that would let them judge a wide range of situations.
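A minimal sketch of the production-rule approach: an ordered list of (condition, action) pairs scanned until one fires. The rules themselves are placeholders for illustration, not a proposed ethic:

```python
# Each production rule is a (condition, action) pair; the first
# condition that matches the situation determines the action.
# These rules are illustrative only.
RULES = [
    (lambda s: s["driver_on_board"] and s["lane_change_harms_driver"],
     "go_straight"),
    (lambda s: s["straight_victims"] > s["lane_change_victims"],
     "change_lane"),
    (lambda s: True, "go_straight"),  # default rule: do nothing special
]

def decide(situation):
    """Scan the rule base in order and return the first firing action."""
    for condition, action in RULES:
        if condition(situation):
            return action
```

The brittleness discussed above is visible even here: any situation not captured by the keys of the dictionary simply falls through to the default rule.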

Before proceeding with the development of such a program, let us see what society thinks, starting with the famous Three Laws of Robotics. Automated driving vehicles think and act on their own based on information obtained through sensing; in other words, they can be called robots, so we refer to the Three Laws of Robotics, proposed in the fiction of Isaac Asimov. In "I, Robot" (1950), they are quoted from the "Handbook of Robotics, 56th Edition, 2058 A.D." as follows.

Article 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Article 2: A robot must obey the orders given to it by human beings, except where such orders would conflict with Article 1.
Article 3: A robot must protect its own existence as long as such protection does not conflict with Article 1 or Article 2.

Suppose a person is being carried by a robot that follows these three laws. How will it behave when the trolley problem occurs? Consider an automated driving vehicle obeying the Three Laws in the trolley problem. With no occupant aboard, a pedestrian ahead in its lane, and an obstacle in the adjacent lane, the vehicle should change lanes in accordance with Article 1 and accept a self-damaging crash. If there were five people ahead and one person in the adjacent lane, it would change lanes to minimize the number of victims; but that still violates Article 1, so the Three Laws alone turn out to be insufficient. What would it do with a driver aboard, one person ahead, and an obstacle in the adjacent lane? Going straight violates Article 1, and changing lanes harms the driver, which also violates Article 1. Even if the driver orders it to go straight, that violates Article 1, and changing lanes violates Article 1 as well, so no decision can be made. Again, the Three Laws of Robotics are not enough. New rules are needed to ensure that decisions are made in accordance with ethical standards.

Therefore, the IEEE (Institute of Electrical and Electronics Engineers) has published the guidelines "Ethically Aligned Design", which are being revised with the opinions of practitioners and researchers taken into account. The guidelines target artificial intelligence (AI) and autonomous systems (AS). Automated driving is powered by AI, and it is autonomous; AI and AS together are called A/IS. As the major principles for A/IS, the following three points are listed.

1. Universal human values: A/IS can be a huge force for good in society if it is designed to respect human rights, harmonize with human values, increase overall happiness, and protect us. These values matter not only to engineers but also to policymakers, and should benefit everyone rather than a particular group or a single country.

2. Political decision-making and data handling: When properly designed and implemented, A/IS has great potential to foster culturally appropriate political freedom and democracy. These systems improve political efficiency, but digital data should be protected in a verifiable manner.

3. Technical dependability: A/IS should provide reliable services, where trust means that A/IS can enhance human-driven value and achieve its designed objectives securely and actively. The technology should monitor that its operation is consistent with human values and meets the prescribed ethical objectives of respecting rights. In addition, validation and verification processes should be developed so as to be auditable.

"Ethically Aligned Design" sets out the following four issues to be considered in realizing these three principles.

1. Human rights: How can we ensure that A/IS does not infringe human rights?
2. Accountability: How can we ensure that A/IS is accountable?
3. Transparency: How can we ensure the transparency of A/IS?
4. Education and awareness: How can we maximize the benefits of A/IS technology and minimize the risk of misuse?

This means that the ethics program (system) of an automated driving vehicle must be framed so as not to infringe human rights, must be able to output its decision-making process for after-the-fact examination, and must be developed so as to cause the fewest problems in the region concerned when that examination takes place. In addition, the following three points should be given priority as implementation principles within the automated driving development team.

1. Identify the norms and values of each system affected by the A/IS.
2. Implement those norms and values in the A/IS.
3. Evaluate the alignment and compatibility of those norms and values between humans and A/IS within each system and development team.

To develop a program that reflects ethics in automated driving in accordance with "Ethically Aligned Design", the author suggests, for example, the following.

1. So as not to infringe human rights, when the automated driving vehicle has no occupants, priority is given to lives outside the vehicle.
  (With pets ahead in the lane and an obstacle in the adjacent lane, it chooses to collide with the obstacle.)
2. When occupants are present in the automated driving vehicle, priority is given to occupant safety.
  (With a pedestrian ahead and an obstacle in the adjacent lane, the vehicle does not change lanes. Rules for such cases shall be established in advance.)
3. When harm falls on persons other than the occupants, the judgment shall follow the ethics of the country or region where the automated driving vehicle is used.

This is only an example; the content reflects neither the author's preference nor any reference. The idea is to place basic principles like these at the foundation of program development. What kind of program, then, can conform to the ethics of the country or region in item 3?

 


November 11, 2020 (Wednesday)

Evaluation of automated driving ethics by driving simulator (1)

There are five challenges to achieving Level 5 automated driving: handling sensor performance inadequacies, take-over, the mixture of manual and automated driving, liability in the event of an accident, and ethics. Of these, in addition to the take-over already described, we will consider the ethical issue and the problem of mixed manual and automated driving. Let us start with ethics.

The problem of automated driving ethics is the trolley problem applied to automated driving. That is: the brakes of the automated driving vehicle have failed and it cannot stop, there is a pedestrian in its own lane and another pedestrian in the other lane. The vehicle will either swerve toward the roadside to avoid the pedestrians and crash into the roadside zone, or keep going and hit the pedestrian ahead. In this ultimate situation, the automated driving system can be programmed to choose; but this must not be a matter of developer preference. On what, other than personal taste, should the decision be based?

In response to this problem, the Massachusetts Institute of Technology (MIT) Media Lab launched a project called Moral Machine. On the Moral Machine site, it collects people's judgments through questionnaires about various automated driving trolley problems. There are many questions, and new situations can be posted. The questions assume an automated driving vehicle with failed brakes on a two-lane road that will cause harm either by going straight or by changing lanes, and the choice between the two is posed in situations such as the following.

・If you go straight, five pedestrians die; if you change lanes, you hit an obstacle on the road and the one driver dies.

・The automated driving vehicle is carrying a pregnant woman and a child who has just turned three, and the five pedestrians turn out to be four people and one dog, two of the people being criminals, one elderly, and one homeless. Moreover, all of them may be crossing against a red light.

There are many other settings. In the questions above, you might choose the lane change in the first setting, but what fraction of people would choose it in the second? When you answer a Moral Machine question, it shows your choice alongside the average of everyone's choices. The author's answers were close to the average, but some were noticeably off. In other words, the Moral Machine has no right answer. So far, some 40 million people around the world have answered, and the trends vary by region and culture.

The parameters of choice are the number of survivors versus victims, sex, age, human or animal, body shape, social status, passenger or pedestrian, compliance with traffic rules, and whether to change course. Among these, the trends common to respondents regardless of nationality, age, or religion are to prefer humans to animals, many lives to few, and the young to the elderly. Respondents also tend to give priority to higher social status and to compliance with traffic rules. For the other parameters there is no consistent trend, although there are regional ones. In Finland, for example, where the income gap is small, passenger versus pedestrian and social status are not deciding factors, whereas in Central and South America many felt that the homeless and criminals need not be saved. Japan's trend differs slightly from the rest of the world: respondents do not weigh the number of lives saved, tending to focus on who to save rather than how many, and they also tend not to change course.

In Japan, moreover, the tendency to save pedestrians rather than passengers is the strongest in the world; Norway is second and Singapore third in this trend, while China tends to save its occupants rather than pedestrians. France tends to focus on the number of survivors and to save the young over the old, whereas in Taiwan, China, and South Korea there is a strong tendency to save the elderly. These results show that there is no solution to the trolley problem that the whole world can accept. But because most Moral Machine respondents are young men, the results of a survey across all generations might differ.

Let us assume automated driving vehicles make the impartial judgment, in the trolley problem, of minimizing the number of victims. If the victims may include the occupants, would you still want to ride one? Automobile specifications are basically uniform worldwide, and automated driving vehicles will not change that basic principle. According to the Moral Machine results, however, driver ethics are not the same across countries and regions. Is it acceptable to change the specification by destination so that different ethics apply in different parts of the world?

One factor in changing specifications is the traffic infrastructure of the region where the vehicle is used. For example, suppose a region has roads with one automobile lane in each direction and a bicycle lane next to the automobile lane. To reduce the risk of contact with oncoming vehicles, the vehicle is programmed to travel slightly further from the oncoming lane and slightly closer to the bicycle lane. This reduces the risk of contact with oncoming vehicles but increases the risk of contact with bicycles, which means a statistically significant increase in crashes between automated vehicles and bicycles in that region. Changing the specification of automated driving in this way changes the type of accident that occurs. For this reason, the author thinks it would be better to standardize those specifications of automated driving that are related to accidents.

With regard to ethics, ethics itself cannot be unified, so why not unify the process of reaching a conclusion? In other words, research and develop programs that, by the same mechanism and while maintaining common principles, reflect the preferences of local users in each country.

| | コメント (0)

2020年11月10日 (火)

Take-over evaluation by driving simulator (7)

In our laboratory, we are also continuing research on the problem of reduced driver wakefulness, not from the standpoint of warning after wakefulness has decreased, but from that of preventing it from decreasing in the first place. Automated driving frees the driver from the heavy task of driving, so when a new task is imposed to keep the arousal level from dropping, the task with the least load on the driver should be selected.

There are a number of tasks that do not reduce alertness if the load on the driver is ignored. Active conversation and singing do not reduce alertness, but they do burden the driver. The tasks we tested in our laboratory that did not reduce alertness included gripping the steering wheel, providing information based on the driving location, flashing LEDs at the driver's breathing rate measured while highly alert, and flashing LEDs that trigger saccades. Of these, steering wheel gripping and saccade induction were particularly effective. If the driver holds the steering wheel, driving habit keeps his/her alertness from dropping. Saccade induction seems to be effective because of human biological characteristics, but whether it can be performed voluntarily remains a problem.

The effectiveness of gripping the steering wheel against lowered arousal amounts to an extension of automated driving level 1, and it costs nothing. Even when automated driving evolves from level 1 to 2 or 3, the most effective means of supporting take-over is to have the driver maintain the driving posture and hold the steering wheel.

On the other hand, the problem on the vehicle side, which cannot support the take-over, is that control shifts abruptly to manual operation after the take-over request. In other words, if driving control is suddenly handed to the driver while the vehicle is moving, a problem arises because the driver is not ready for the driving operation. In a planned take-over, the request is assumed to be issued only on a straight section, because the driver cannot cope unless the road is straight.

As described above, most drivers operated the steering wheel smoothly both when changing lanes immediately after completing the take-over on a straight track and when changing lanes after driving in the same lane for a while after the take-over. This indicates that the driver's manual operation requires a familiarization period. Another study reports that steering is disturbed when the take-over occurs while driving on a curve. In view of this, it may be preferable to continue level 1 vehicle control such as lane keeping instead of stopping all vehicle control at the time of take-over. However, it must be verified that lane keeping does not interfere when quick steering is needed to avoid an obstacle.

We have found a number of challenges in take-over. Switching from automated to manual driving operation cannot be settled simply by fixing specifications. Conversely, what about switching from manual driving operation to automated driving? The author has experienced this with driving support systems, and there is no problem. With ACC, the driver simply turns on the set switch and can safely release the pedal. The same goes for switching to automated driving, which the driver can turn on at any time. If you are worried that automated driving might require a special HMI, try it out on a driving simulator, as we did in the take-over experiments. In driving support systems and automated driving, the driver entrusts the driving operation to the automobile, which greatly increases the usefulness of the driving simulator. When the driver drives manually, the behavior of the vehicle in response to steering input, and the feel that behavior imparts to the driver through the steering wheel, are strictly required. In the case of advanced driver assistance systems, however, the driver does not demand such a precise feel through the steering wheel; if the driving simulator pays attention to the graphics representing the driving environment, the reproduction of G forces need not be as accurate as for manual driving. Therefore, even low-cost driving simulators can be fully utilized.

 

| | コメント (0)

2020年11月 9日 (月)

Take-over evaluation by driving simulator (6)

A total of 14 collaborators performed subtasks i) through vi), randomly assigned for each run, in the TTC 7-second and 5-second cases, and take-over was confirmed in all runs. Obstacles were basically avoided by steering. As expected, the behavior of the collaborators from the take-over request was as follows: they immediately looked ahead from wherever they had been looking for the subtask; at the same time, if they had released the steering wheel, they grasped it with both hands; then they understood the situation and shifted to the avoidance maneuver.

All 14 persons avoided obstacles by steering without stopping, but pedal operation differed among drivers. When a take-over request was issued, the brake was not applied and only the accelerator was released, so 8 of the 14 operated the steering wheel while depressing the accelerator pedal to maintain vehicle speed. The other 6 first depressed the brake pedal, steered while decelerating, and then immediately moved back to the accelerator to restore vehicle speed. In the end, none of them stopped completely; all continued to drive manually, performing evasive maneuvers with the steering wheel. To confirm whether this behavior is invariant, we have collected data from many more collaborators and will present the results later.

Comparing the steering operation to avoid obstacles at TTC 7 seconds and 5 seconds, the smoothness differed: the steering wheel was operated smoothly at TTC 7 seconds but steeply at TTC 5 seconds. This is probably because at TTC 7 seconds there was enough distance to the obstacle for the driver to respond calmly, whereas at TTC 5 seconds the driver hurried the steering operation because the distance was insufficient. When the distribution of steering onset times of the 14 persons was examined, TTC 7 seconds was more dispersed than TTC 5 seconds. It is likely that, with the margin at TTC 7 seconds, each driver began the avoidance maneuver at his or her preferred timing, whereas at TTC 5 seconds there was no time to spare, the maneuver had to begin immediately, and the distribution did not vary. These differences between TTC 7 and 5 seconds were common to all 14 persons.
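The reported difference in dispersion can be illustrated in a few lines; the onset times below are hypothetical placeholders shaped only to mirror the described trend, not the experiment's measurements:

```python
from statistics import pstdev

# Hypothetical steering-onset times in seconds after the take-over
# request -- illustrative only, not the measured data.
onset_ttc7 = [1.2, 1.8, 2.5, 3.1, 3.9, 2.2, 1.5]  # drivers chose their own timing
onset_ttc5 = [0.9, 1.0, 1.1, 1.0, 0.9, 1.1, 1.0]  # drivers had to react at once

print("onset spread at TTC 7 s:", round(pstdev(onset_ttc7), 2))
print("onset spread at TTC 5 s:", round(pstdev(onset_ttc5), 2))
```

A larger standard deviation at TTC 7 seconds corresponds to the wider distribution of preferred start timings described above.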

After the experiment, many collaborators said they felt rushed because 5 seconds of TTC was not enough time. However, when the TTC was later tested at 3.6 seconds, all of them still avoided the obstacle, so the TTC limit is 3.6 seconds or less. Even though take-over was confirmed at TTC 3.6 seconds, it does not follow that issuing the take-over request only 3.6 seconds before the obstacle is acceptable, because the driver feels insecure when the TTC is that tight.

At TTC 7 seconds, all the drivers visually confirmed via the right door mirror that there was no vehicle diagonally behind before changing lanes to avoid the obstacle. At TTC 5 seconds, however, all of them changed lanes instantly and said afterwards that they had not been able to check for following cars in the door mirrors. Yet the facial recordings from the experiment showed that everyone did glance at the right door mirror when changing lanes. In other words, the drivers checked the mirror unconsciously and, in their anxiety, could not remember doing so. Even if the TTC physically allows the obstacle to be avoided, forcing the driver into maneuvers that feel uncomfortable is a problem. Therefore, it is recommended that the TTC be at least 7 seconds, with a margin.

However, this experiment was a special situation in which the vehicle ran alone on the course. What happens when other vehicles are present? Next, as in the previous experiment, we tested on a driving simulator whether take-over was possible in mixed traffic. The TTC was 7 seconds, and an overtaking vehicle approached from diagonally behind after the take-over request was presented. The timing of the overtaking vehicle's appearance, and whether it appeared at all, was changed for each run to prevent the driver from learning the scenario. The number of collaborators was 14, the same as before. Of the 14, 13 successfully took over under the various tasks even in the mixed traffic environment. However, there was one case, in one run out of many, in which a driver caused a contact accident: the overtaking vehicle diagonally behind was noticed late, the avoidance maneuver became impossible, deceleration was insufficient, and the vehicle contacted the obstacle. Although this was only 1 occurrence in about 100 runs in total, it is enough to show that there are cases in which the take-over does not succeed. In other words, take-over cannot always be accomplished even with a TTC margin of 7 seconds.
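One contact in roughly 100 runs may look negligible, but a standard binomial estimate shows how wide the remaining uncertainty is. The sketch below applies a Wilson score interval to the counts mentioned in the text; the trial total of 100 is approximate:

```python
from math import sqrt

def wilson_interval(failures: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a failure probability."""
    p = failures / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z * z / (4 * trials ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(1, 100)  # 1 contact accident in ~100 runs
print(f"take-over failure rate, 95% CI: {lo:.2%} to {hi:.2%}")
```

Even with a single observed failure, the interval's upper bound is several percent, which supports the point that a 7-second TTC margin does not guarantee the take-over.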

By the way, this experiment also compared the case in which the following vehicle is checked in the normal door mirrors with the case in which liquid crystal monitors substituting for the left and right door mirrors are placed on both sides of the meter. With the liquid crystal monitors, the following vehicle enters the field of view without the large gaze shift a door mirror requires, so there was no delay in spotting the overtaking vehicle diagonally behind, and take-over was achieved in every attempt.

In the above experiments, all drivers ultimately avoided the obstacle by steering. This may be because many of them had little driving experience. Therefore, 32 drivers with more than 20 years of driving experience participated in further driving simulator experiments. At TTC 7 seconds, no subtasks were imposed, and drivers were left free to decide what to do during automated driving. Of the 32 drivers, 62% (21 persons) noticed the take-over request, decelerated first with the brake, and then changed lanes by steering to avoid the obstacle while accelerating again. 19% (6 persons) avoided it by steering alone, maintaining speed without braking. 16% (5 persons) decelerated with the brake and stopped completely just in front of the obstacle. 1 person (3%) steered to avoid the obstacle while braking and then stopped. Thus 81% (27 persons) avoided the obstacle by steering. Those who stopped by braking while going straight were 16%, a minority. It was confirmed that as driving experience increases, the number of people who stop without steering increases; but they remain a minority, and if take-over occurs while the vehicle is moving, the majority will try to keep moving and deal with the situation.

In the above experiments, it was found that problems can occur when take-over is performed while the vehicle is moving. The problems fall on the driver side and on the vehicle side. The driver-side problem is whether the driver is in a state to take over immediately when the request is issued. The vehicle-side problem is that the vehicle does not support the take-over. Whether the driver can take over comes down to whether, upon the request, he/she can immediately monitor the road ahead, recognize the situation, and transition smoothly to operation; that is, whether the driver has sufficient alertness. Hence the discussion of driver monitors that determine the driver's arousal level at, or before, the take-over request. Should a driver monitor be installed, the driver constantly monitored during automated driving, and a warning or stimulus issued to restore alertness whenever a decrease is detected? It would be fine if the driver monitor could measure wakefulness accurately, but what if a warning is issued by mistake when wakefulness has not actually dropped? It is not a serious mistake, but a driver who is enjoying a comfortable automated drive might find the warning offensive.

 

| | コメント (0)

2020年11月 8日 (日)

Take-over evaluation by driving simulator (5)

In our laboratory, we have conducted several experiments on the behavior of the driver during the emergent take-over using a driving simulator. Here are some of the highlights.

The first consideration was whether the emergent take-over would be feasible, and the first thing to decide was how many seconds in advance the take-over request could be issued. The take-over request occurs when, during level 2 automated driving, a sensor detects an obstacle that the automated driving system alone cannot handle; the system issues the request to the driver and immediately cancels automated driving. Take-over requests are issued with a high-frequency beep that is clearly distinguishable to the driver. If the sensor's detection distance is 200 m and the obstacle is stationary, then at a vehicle speed of 27.8 m/s (100 km/h), 200 / 27.8 ≈ 7 seconds, so the take-over request can be issued 7 seconds before the obstacle. A 140 m sensor allows about 5 seconds, and a 100 m sensor about 3.6 seconds.
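The arithmetic above can be condensed into a short sketch; the function name is illustrative, the values are the text's example (a stationary obstacle at 100 km/h), and braking and system latency are deliberately ignored:

```python
def ttc_seconds(detection_range_m: float, speed_kmh: float) -> float:
    """Time until reaching a stationary obstacle first detected at the
    sensor's maximum range; braking and system latency are ignored."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s (100 km/h -> 27.8 m/s)
    return detection_range_m / speed_ms

# The three sensor ranges discussed in the text, at 100 km/h:
for range_m in (200, 140, 100):
    print(f"{range_m} m sensor -> request issued {ttc_seconds(range_m, 100):.1f} s ahead")
```

Running this reproduces the roughly 7, 5, and 3.6 second lead times used in the experiments below.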

Based on the above settings, an experiment was conducted to determine whether take-over could be achieved with 3 types of TTC: 7 seconds, 5 seconds, and 3.6 seconds. The position of the obstacle triggering the take-over request was changed every run so that the driver could not anticipate it. The test was conducted on a highway with two lanes, a driving lane and a passing lane.

The experimenter told the driver that the vehicle would drive automatically until the take-over request beeped, and to determine the reason for the request and act accordingly. The experimenter did not mention changing lanes with the steering wheel or slowing down with the brake pedal. The driver was also assigned one of the following six tasks to perform until the take-over request was issued.

i) Place both hands on the steering wheel and watch the road ahead. The foot position is free.
ii) Release both hands from the steering wheel; the line of sight is free. The foot position is free.
iii) Place both hands on the steering wheel and watch the video on the navigation screen.
iv) Release both hands from the steering wheel and watch the video on the navigation screen.
v) Place one hand on the steering wheel and perform an additional task.
vi) Release both hands from the steering wheel and perform the additional task.

The navigation screen was installed at the lower center of the instrument panel, and the forward view could not be seen while the navigation screen was being watched closely. The additional task was to input a series of four-digit numbers presented next to the navigation screen.

Before we get to the experimental results, let us predict the driver's response from the take-over request to the take-over. It is more important to estimate the process by which the results arise and compare it with the actual results than merely to obtain the experimental results. When the take-over request is issued, if the driver is not looking ahead, the driver will presumably turn his/her face forward immediately and direct attention forward, then try to understand the situation to find out why the request was issued.

Then, once the driver understands the situation, he/she should choose the action appropriate to it and move on to steering or braking. When that action is carried out, the take-over of the driving task is complete, and the vehicle enters the so-called manual driving feedback loop. During automated driving the driver is not involved in operation at all, so the driver can be said to be outside the driving feedback loop. That is, after the take-over request, the driver returns to manual operation by understanding the situation and starting the manual driving operation, closing the feedback loop manually.
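The staged response just described (attend forward, understand the situation, act) can be written down as a minimal timeline check; the stage durations are illustrative placeholders, not measured values:

```python
# Driver response stages after a take-over request, as described above.
# Durations are illustrative placeholders, not measured values.
STAGES = [
    ("orient_gaze_forward", 0.5),   # turn face and attention to the road
    ("understand_situation", 1.5),  # find out why the request was issued
    ("select_and_act", 1.0),        # steer and/or brake
]

def take_over_fits(ttc_s: float) -> bool:
    """True if the staged response completes within the available TTC."""
    return sum(duration for _, duration in STAGES) <= ttc_s

print(take_over_fits(7.0))  # ample margin
print(take_over_fits(2.0))  # the staged response does not fit
```

Comparing the summed stage times against the TTC is exactly the condition stated in the next sentence.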

If the above actions are completed within the TTC, the take-over is established.

 

| | コメント (0)

2020年11月 7日 (土)

Take-over evaluation by driving simulator (4)

After all, automated driving levels 2 and 3 are automated driving systems in which the driver cannot doze off. Furthermore, it is not good for alertness to drop while driving, because when alertness decreases, peripheral monitoring becomes less effective at level 2, and emergent take-over requests cannot be met at level 3. In contrast to the emergent take-over, there are cases where a planned take-over is carried out.

Planned take-over refers to the type of take-over that appears in systems that cannot drive automatically on general roads but only on expressways. That is, the exit point of the expressway and the entrance to a drive-in are known, and the take-over request from automated to manual driving is made ahead of them. In this case, since the request is issued with ample margin, the driver can be expected to recover enough to drive manually after the request even if arousal has dropped. In addition, since the request can be conveyed to the driver gradually rather than suddenly, the driver's consciousness can be raised gradually toward driving.

When executing a planned take-over, the point at which the request is issued is important: it should be on a straight section. The driver has not been performing the driving operation up to the take-over, so he/she needs to get reaccustomed to it, and for that it is better to drive on a straight line first.

In a past driving simulator experiment in this laboratory, the longer the vehicle ran on a straight line after a take-over on a straight section, the smoother the subsequent steering for a lane change was. This indicates that it takes time to get used to driving again. Therefore, the take-over request should be issued at a point where the straight section continues for a long time. Automated vehicles carry digital maps, so selecting such a point is not a concern.
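Choosing such a point from the digital map could be sketched as follows; the waypoint format and the turn threshold are assumptions for illustration, and heading wrap-around at ±180 degrees is ignored for brevity:

```python
from math import atan2, degrees, hypot

def longest_straight_m(waypoints, max_turn_deg=2.0):
    """Length (m) of the longest run of consecutive map segments whose
    heading changes by less than max_turn_deg between steps."""
    best = run = 0.0
    prev_heading = None
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        heading = degrees(atan2(y1 - y0, x1 - x0))
        seg = hypot(x1 - x0, y1 - y0)
        if prev_heading is None or abs(heading - prev_heading) < max_turn_deg:
            run += seg   # still on the same straight
        else:
            run = seg    # a curve: restart the run
        best = max(best, run)
        prev_heading = heading
    return best

# Three straight 10 m segments followed by a 90-degree turn:
print(longest_straight_m([(0, 0), (10, 0), (20, 0), (30, 0), (30, 10)]))  # -> 30.0
```

A planner could then issue the planned take-over request only where this length exceeds some comfortable minimum.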

However, in an emergent take-over, there is no choosing the point. In other words, the point where the take-over request is issued is not necessarily on a long straight section. What happens when an emergent take-over is requested on a curve? Expressways have curves, and there are curves on gradients as well.

Driving simulator experiments on what happens when the driver takes over on a curve show that there is a problem: most drivers wobble after the take-over. The difference between automated driving and the driver's own driving, not felt on a straight, is felt on curves. How a curve is driven reflects the driver's experience and preference, and is individual at corner entry, mid-corner, and corner exit. For example, at the point of entering a curve from a straight section, the driver may feel something off about the automated system's speed and steering control. If the take-over occurs mid-curve, the driver may become unsteady while trying to shift to his/her own driving line. The emergent take-over is problematic, and not only when it occurs on a curve.

 

| | コメント (0)

2020年11月 6日 (金)

Take-over evaluation by driving simulator (3)

At level 2, the automated driving system controls the operation, so the vehicle can be driven hands-off. However, in the hands-off state, the driver's concentration on monitoring around the vehicle decreases, as described above.

For this reason, in most level 2 automated driving currently on the market, automated driving does not work unless the driver grips the steering wheel. If the driver holds the steering wheel, he/she naturally stays focused on driving and on monitoring the surroundings, even while the automation operates the steering and pedals.

Holding the steering wheel makes it the same as level 1 lane keeping assist, which generally does not work unless the driver holds the wheel. Lane keeping assist does not involve the pedals, but combined with level 1 adaptive cruise control it is equivalent to level 2. In other words, if the driver must hold the steering wheel at level 2, it is no different from level 1.

When the driver holds the steering wheel, take-over is easy, and there is no problem because he/she concentrates on monitoring the surroundings. However, distinguishing level 1 from level 2 then loses its meaning, and level 2's reason for existing diminishes.

As long as the driver holds the steering wheel, level 2 and level 1 are not much different. So what about level 3? At level 3, in addition to operation control, peripheral monitoring is also performed by the automated driving system, so the driver has no monitoring obligation.

Since there is no peripheral monitoring obligation, the driver can do anything while the vehicle drives itself. He/she can enjoy a DVD movie on the navigation screen or watch a smartphone. But can he/she doze off? A take-over request can also occur at level 3, when the automated driving system encounters a situation it cannot handle or a failure occurs. If the driver is dozing at that moment, a smooth take-over is impossible.

Therefore, mounting a driver monitor is also considered at level 3. Unlike level 2, the system need not warn about the degree of concentration on monitoring the surroundings, so warnings about looking aside disappear, but a warning must be issued when the driver falls asleep.

In fact, many doze warnings have been researched, developed, and commercialized, but private vehicles with doze warnings are not common. The general thinking is simply not to drive when sleepy: if the driver feels sleepy, the usual response is to take a rest and a nap instead of driving.

Also, is it effective to detect a doze and issue a warning? A temporary effect can be expected, but continuously preventing the driver from falling asleep is difficult. Therefore, after the driver monitor in a level 3 system detects the decrease in the driver's arousal level and warns the driver once to restore alertness, the automated driving should be canceled. In other words, the driver cannot doze off even at level 3; if sleepy, he/she has to rest at a drive-in.

 

| | コメント (0)

2020年11月 5日 (木)

Take-over evaluation by driving simulator (2)

What happens if the automated driving system requests a take-over while the driver is doing tasks other than monitoring the surroundings? In such a situation a smooth take-over cannot be expected, and automated driving, originally meant to improve safety, may create a dangerous situation.

At level 2, although operation control is performed by the automated driving side and peripheral monitoring by the driver, the automated driving side also monitors the surroundings in order to perform operation control. Therefore, even if the driver neglects the monitoring, appropriate operation control is performed in most cases.

Therefore, there is concern that drivers will misunderstand the level 2 specification as not requiring peripheral monitoring. There is also concern that, in a situation where no operation control is required of them, drivers cannot concentrate on monitoring even though they are obliged to. Naturally, a driver who is driving himself/herself cannot drive without monitoring the surroundings, and so concentrates on it. We all know that performing some other task, such as focusing attention on a phone conversation, makes it impossible to concentrate on monitoring the surroundings and drive safely. There may thus be a contradiction in imposing a monitoring obligation under a specification that makes concentrating on monitoring difficult.

The claim that level 2 automated driving has a peripheral monitoring problem is based on the results of a driving simulator experiment conducted in this laboratory: most collaborators' concentration on peripheral monitoring decreased when they experienced automated driving in which they only monitored, without performing operation control.

Not only does concentration decrease; alertness decreases as well. In other words, if you keep riding without performing driving control, you get sleepy.

Rather, drivers seem to be less drowsy when performing tasks other than peripheral monitoring. For example, watching a video on the navigation screen or a smartphone in addition to monitoring prevents drowsiness. However, this is so-called inattentive driving, and sufficient monitoring of the surroundings cannot be said to be taking place.

A driver monitor that analyzes the driver's face with a camera to determine whether he/she is concentrating on peripheral monitoring has been considered. Current image processing can detect the driver's line of sight, the blink rate as an index of sleepiness, and the degree of eyelid opening and closing, so it is possible to judge whether the driver is concentrating on monitoring the surroundings.
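One widely used index built from exactly these eyelid measurements is PERCLOS, the fraction of time the eyes are mostly closed. A minimal sketch, with hypothetical per-frame openness values and an illustrative decision threshold:

```python
def perclos(eyelid_openness, closed_below=0.2):
    """PERCLOS: fraction of frames in which the eyelid is nearly closed.
    Openness runs from 0.0 (shut) to 1.0 (wide open)."""
    closed = sum(1 for o in eyelid_openness if o <= closed_below)
    return closed / len(eyelid_openness)

# Hypothetical per-frame openness values from a face-camera pipeline:
frames = [0.9, 0.8, 0.1, 0.05, 0.9, 0.85, 0.1, 0.9, 0.9, 0.9]
score = perclos(frames)
print(f"PERCLOS = {score:.2f}, drowsy = {score > 0.15}")  # threshold is illustrative
```

A production monitor would combine such a score with gaze direction and blink rate over a sliding time window rather than a fixed list of frames.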

If the driver monitor detects that concentration has decreased, a warning is issued. If the driver dozes off, it can serve as a doze warning.

In level 2 automated driving, the driver tends to let the concentration of peripheral monitoring drop. Therefore, if a driver monitor is mounted, the warning will sound constantly during level 2 automated driving. Is this good for the driver? Warning sounds are not pleasant; they are meant to alert, so unpleasant sounds provoke a quicker response. The driver monitor's warning sound may therefore impair the comfort of automated driving.

One wonders whether to use the automated driving mode while being watched by the driver monitor, or to stick to manual operation to avoid hearing unpleasant warning sounds. At first you will probably try the automated mode, but eventually you may end up driving only manually.

 

| | コメント (0)

2020年11月 4日 (水)

Take-over evaluation by driving simulator (1)

Practical application of automated driving is advancing in proportion to the technology level. The following six levels proposed in SAE J3016 are the de facto standard for describing this level of technology.

・Level 0 (manual driving): no automation; the driver performs all operations.
・Level 1 (driver assistance): partial automation of the driver's operation control.
・Level 2 (partial automated driving): operation control is automated under driver supervision.
・Level 3 (conditional automated driving): conditional automated driving; the driver serves as backup.
・Level 4 (highly automated driving): automated driving is possible; a driver is still present and a manual mode is available.
・Level 5 (fully automated driving): autonomous in all situations; no driver required.

So-called automated driving means level 3 or higher, which can run without driver monitoring and is called highly automated driving. The critical difference between level 2 and level 3 concerns responsibility when an accident occurs in the automated mode: at level 2 the vehicle was under the driver's supervision, so responsibility rests with the driver; at level 3 there is no driver supervision, so responsibility rests with the manufacturer providing the automated driving.

There are five issues to be addressed in realizing highly automated driving: handling insufficient sensor performance, take-over, the mixing of manual and automated driving, responsibility for traffic accidents, and ethical issues. Among these, the driving simulator can be used for take-over and ethical issues.

Let's start with take-over. Take-over means that the automated driving state is relinquished and the driver changes to manual operation. Take-over is possible up to automated driving level 4 because the steering wheel and pedals are still mounted. Let's look at the automated driving levels from a take-over perspective.

SAE classifies automated driving into six stages, from conventional manual operation at level 0 to full automated driving at level 5. Three items change as the level rises: vehicle control, peripheral monitoring, and response to system trouble. At level 1, vehicle control is shared between automated driving and the driver; at level 2, vehicle control is performed by automated driving; at level 3, peripheral monitoring is added; and at level 4, response to system trouble is added. The functions of levels 4 and 5 are the same, but level 5 requires no driver, so driving devices such as the steering wheel and pedals are not needed.
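The division of roles just described can be summarized as a small lookup table; "D" and "S" mark whether the driver or the system is responsible, following the text's summary:

```python
# Role assignment by SAE level, per the summary above
# (D = driver, S = system).
LEVELS = {
    0: {"vehicle_control": "D",   "monitoring": "D", "fallback": "D"},
    1: {"vehicle_control": "D+S", "monitoring": "D", "fallback": "D"},
    2: {"vehicle_control": "S",   "monitoring": "D", "fallback": "D"},
    3: {"vehicle_control": "S",   "monitoring": "S", "fallback": "D"},
    4: {"vehicle_control": "S",   "monitoring": "S", "fallback": "S"},
    5: {"vehicle_control": "S",   "monitoring": "S", "fallback": "S"},
}

def take_over_possible(level: int) -> bool:
    """A human can resume control wherever driving devices remain (levels <= 4)
    and the system is at least sharing vehicle control."""
    return level <= 4 and LEVELS[level]["vehicle_control"] != "D"
```

By this table, take-over is meaningful at levels 1 through 4 and impossible at level 5, where no steering wheel or pedals exist.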

Thus, take-over occurs at Levels 2 and 3. However, even at Level 4, since the driver can still operate the vehicle, take-over is expected to be possible. Let's see how take-over works at each level of automated driving. Level 0 is of course irrelevant, so we start from Level 1.

Level 1 is the advanced driver assistance system itself, such as ACC. During car-following, the driver can take over freely at any time, and the switch back from manual operation to assisted control is often left until the situation again allows the automated control to support the driving.

At Level 2, all driving control can be left to the vehicle, which increases the opportunities for take-over. There is no particular problem when the driver actively takes over at their own will to enjoy driving beyond what the automated control provides. The problem arises when the automated driving requests a take-over while the driver is not concentrating on monitoring the surroundings. Automated driving requests a take-over when control becomes difficult, but there is a concern that the driver may not be in a state to take over at that moment.
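The Level 2 concern above, a take-over request arriving while the driver is inattentive, can be sketched as a simple decision rule. The confidence value, threshold, and escalation label are hypothetical; a real system's criteria for "control is difficult" are far more involved.

```python
def takeover_request(control_confidence: float,
                     driver_attentive: bool,
                     threshold: float = 0.5) -> str:
    """Toy Level 2 take-over logic (hypothetical threshold).

    When the automated control judges the situation too difficult
    (confidence below threshold) it requests a take-over; if the
    driver is not monitoring the surroundings, the warning must be
    escalated, which is exactly the risky case described in the text.
    """
    if control_confidence >= threshold:
        return "continue_automated"
    if driver_attentive:
        return "request_takeover"
    return "request_takeover_with_escalated_warning"
```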

 

| | コメント (0)

2020年11月 3日 (火)

ADAS evaluation by driving simulator (3)

Next is the lane departure warning system, which warns of deviation from the lane while driving and is also called LDWS. The LDWS is intended for use on expressways because it detects the white lines at both edges of the driving lane ahead with an in-vehicle camera. On ordinary city roads the road surface carries many white lines other than those marking the lane edges, so the system is often enabled only at, for example, 60 km/h or above.

The following LDWS items can be evaluated using the driving simulator.

1) Appropriateness of the warning timing
2) Validity of the means of warning
3) Validity of the over-detection rate
4) Validity of the non-detection rate
5) Evaluation of product quality on curved roads (the vehicle often crosses the inner white line on curves)

The warning-timing evaluation validates the distance to the white line and the number of seconds before the vehicle reaches it. The warning-means evaluation compares sound, display, and haptic vibration of the steering wheel. Over-detection and non-detection follow the same concept as for the FVCWS. Since many drivers cut inside the white line on curves, the acceptable degree of cut-in should be evaluated as a countermeasure against over-warning.
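The "how many seconds to reach the white line" criterion is commonly expressed as a time-to-line-crossing (TLC). A minimal sketch, assuming a constant lateral drift and a placeholder 1.0 s threshold; finding the acceptable threshold is exactly what the simulator evaluation is for.

```python
def time_to_line_crossing(lateral_distance_m: float,
                          lateral_speed_mps: float) -> float:
    """Seconds until the wheel reaches the lane marking, assuming the
    current lateral drift continues. Infinite if the vehicle is not
    drifting toward the line."""
    if lateral_speed_mps <= 0.0:
        return float("inf")
    return lateral_distance_m / lateral_speed_mps


def ldw_should_warn(lateral_distance_m: float,
                    lateral_speed_mps: float,
                    tlc_threshold_s: float = 1.0) -> bool:
    # 1.0 s is a placeholder; tuning it is the simulator study's job.
    return time_to_line_crossing(lateral_distance_m,
                                 lateral_speed_mps) < tlc_threshold_s
```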

Next is LKA (Lane Keep Assist). Just as automated braking is not simply the FVCWS plus brake control, the LKA is not simply the LDWS plus steering control. Nevertheless, the hardware configuration of the LKA is that of a lane departure warning system plus steering control, so the in-vehicle camera that recognizes the driving lane is the same.

However, the LKA does not issue a lane departure warning while it is operating, because under LKA control the vehicle does not deviate from the lane. The main driving-simulator evaluation item for the LKA is therefore whether the control behavior on curved roads is accepted by the driver. Drivers normally tend to move toward the inside of the lane on a curve, so even a driver comfortable with lane centering on a straight road may feel uncomfortable when the system holds the lane center through a curve. It is therefore necessary to confirm how far the lane-keeping control should let the vehicle approach the inside of the curve.
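One way to quantify "how far the vehicle approaches the inside of the curve" is the remaining margin between the vehicle's inner side and the inner lane marking. A toy geometric sketch; the names and the flat-lane simplification (ignoring how curvature widens the swept path) are my own.

```python
def inside_margin_m(lane_width_m: float,
                    lateral_offset_m: float,
                    vehicle_width_m: float) -> float:
    """Distance left between the vehicle's inner side and the inner lane
    marking, given a lateral offset toward the curve inside
    (0 = lane centre). Simplified flat-lane geometry."""
    half_free = (lane_width_m - vehicle_width_m) / 2.0
    return half_free - lateral_offset_m
```

A simulator study could then vary the controller's inward offset and record both this margin and the driver's subjective acceptance.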

The LKA does not operate unless the driver is holding the steering wheel. Therefore, the warning sound and warning method used when the driver's hands leave the wheel, and the HMI decision of how many seconds of hands-off driving should elapse before control is released, are also evaluation items for the driving simulator.

FVCWS, automated braking, ACC, LDWS, and LKA are the main advanced driver assistance systems. In addition, there are rear-diagonal vehicle warning for lane changes, automated lane change, surroundings warning during parking, automated parking, and so on.

For any of these advanced driver assistance systems, the human factors of the HMI, that is, how information and warnings are presented to the driver, can be evaluated with a driving simulator, and the experiments can be conducted more safely and efficiently than with an actual vehicle. Since control feeling can be regarded as a kind of HMI, it can also be evaluated on a driving simulator. However, evaluating control feeling requires a driving simulator that reproduces acceleration (G) as faithfully as a real vehicle; with a fixed-base simulator that provides only visuals, the evaluation items must be chosen carefully.

 

| | コメント (0)

2020年11月 2日 (月)

ADAS evaluation by driving simulator (2)

Next is automated braking. Automated braking is not simply the FVCWS (front vehicle collision warning system) with braking control added. Its specification reflects the commercial failure of the FVCWS, so the warning specifications differ, and the braking control itself goes beyond anything the FVCWS anticipated.

In many cases, the FVCWS provides two warnings, a primary and a secondary warning, depending on the TTC. However, the primary warning was not commercially successful because its many extra warnings were dismissed as "nuisance alarms". Therefore, many automated braking systems have no primary warning. In addition, while some FVCWSs display the inter-vehicle distance as a form of information provision, the only information-providing element of automated braking is the secondary warning. Moreover, the automated brake is applied later than the secondary warning, and the braking itself is an intense, sudden braking that an average driver would never apply in a lifetime.
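The staging described above, a secondary warning only, with braking applied later still, can be sketched as TTC thresholds. The threshold values here are placeholders, not figures from any production system; real timings depend on speed, road surface, and the physical limit of collision avoidance.

```python
def brake_system_response(ttc_s: float,
                          secondary_warning_ttc_s: float = 1.6,
                          braking_ttc_s: float = 0.9) -> str:
    """Toy staging for an automated-braking system without a primary
    warning: warn first, then brake hard at a later (smaller) TTC."""
    if ttc_s <= braking_ttc_s:
        return "automatic_emergency_braking"
    if ttc_s <= secondary_warning_ttc_s:
        return "secondary_warning"
    return "no_action"
```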

Therefore, for automated braking, the evaluation on the driving simulator mainly concerns the secondary warning timing. The timing of the automated braking itself is set at the physical limit for avoiding a collision, so driver evaluation of that timing is unnecessary.

Next is ACC (adaptive cruise control). As a driving support system, it is the second-oldest commercialized system after the FVCWS, and the oldest when regarded as an improved version of cruise control. An ACC can also incorporate a FVCWS, since it adds the required sensors and actuators to cruise control. However, like automated braking, it often carries not the full FVCWS functionality but only the specifications the ACC itself requires.

Possible driving-simulator evaluation items for the ACC include the control feeling of the deceleration after catching up with a preceding vehicle and of the acceleration back to the set speed when the preceding vehicle is no longer there. However, control feeling is better suited to actual-vehicle testing, so there is little need for a driving simulator there. Driving simulators are mainly used to evaluate the utility and safety of ACC from the perspective of human factors: for example, the reduction in driving-task workload achieved by the ACC, and whether driver behavior changes as drivers become accustomed to it.
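As a rough illustration of what the "control feeling" evaluation targets, here is a minimal constant-time-gap ACC sketch. The time gap and gain are hypothetical tuning parameters; production controllers are far more elaborate.

```python
def acc_target_speed(set_speed_mps: float,
                     lead_present: bool,
                     gap_m: float,
                     lead_speed_mps: float,
                     time_gap_s: float = 1.5,
                     k_gap: float = 0.3) -> float:
    """Minimal constant-time-gap ACC sketch (hypothetical gains).

    With no lead vehicle, cruise at the set speed; with a lead vehicle,
    regulate toward the desired time gap, never exceeding the set speed."""
    if not lead_present:
        return set_speed_mps
    desired_gap_m = time_gap_s * lead_speed_mps
    # Speed correction proportional to the gap error.
    target = lead_speed_mps + k_gap * (gap_m - desired_gap_m)
    return min(target, set_speed_mps)
```

How aggressively `k_gap` closes the gap error is precisely the kind of parameter whose subjective acceptability a human-factors study would assess.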

Historically, the ACC has evolved from throttle control to throttle-plus-brake control, and from operation only at 40 km/h or above to a full speed range including stop-and-go. It goes without saying that driving simulators played an active role in determining the human-factors specifications behind these advances.

 

| | コメント (0)

2020年11月 1日 (日)

ADAS evaluation by driving simulator (1)

For about two weeks, I will post English translations of partially reorganized content from the September and October entries.

Here, the advanced driver assistance systems that can be evaluated with a driving simulator, and the functions that can be evaluated, are described. Advanced driver assistance systems sit at the lowest automation level, Level 1, yet they have far more diverse specifications than Level 2 or higher automated driving.

First, let's look at the front vehicle collision warning system, the oldest of the advanced driver assistance systems. Its basic function is to measure the distance to the vehicle traveling ahead in the same lane and to warn the driver when the TTC (time to collision) falls below a certain threshold. The driving simulator can be used for the following evaluation items.

1) Appropriateness of the warning timing
2) Validity of the means of warning
3) Validity of the over-detection rate
4) Validity of the non-detection rate
5) Evaluation of product quality on curved roads (on curves, guardrails are often detected and false warnings issued)
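The system's basic function, warning when the TTC falls below a threshold, can be sketched directly: TTC is the gap divided by the closing speed. The 2.0 s threshold is a placeholder, since choosing it is precisely what the warning-timing evaluation addresses.

```python
def time_to_collision_s(gap_m: float,
                        own_speed_mps: float,
                        lead_speed_mps: float) -> float:
    """TTC = gap / closing speed; infinite when not closing in."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0.0:
        return float("inf")
    return gap_m / closing


def fvcws_warn(gap_m: float,
               own_speed_mps: float,
               lead_speed_mps: float,
               ttc_threshold_s: float = 2.0) -> bool:
    # Placeholder threshold; real systems tune this per evaluation item 1).
    return time_to_collision_s(gap_m, own_speed_mps,
                               lead_speed_mps) < ttc_threshold_s
```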

Over-detection means that something other than the vehicle traveling ahead is detected; this is a type I error. Non-detection means that the vehicle traveling ahead is missed; this is a type II error.
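The two error types can be computed from detection counts over a set of test scenarios. A small sketch; the count names (tp = correct detections of a lead vehicle, fp = detections with no vehicle present, fn = missed vehicles, tn = correct rejections) are my own labels.

```python
def detection_error_rates(tp: int, fp: int, fn: int, tn: int):
    """Over-detection (type I) rate over all no-vehicle cases, and
    non-detection (type II) rate over all vehicle-present cases."""
    over_detection_rate = fp / (fp + tn)
    non_detection_rate = fn / (fn + tp)
    return over_detection_rate, non_detection_rate
```

Whether a given pair of rates is acceptable to drivers is then the question the simulator experiments answer.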

Since the front vehicle collision warning system has no vehicle control and only provides information to the driver, the driving-simulator evaluation items mainly concern the HMI. Validity evaluation of the warning timing is fundamental to any warning system: it assesses the system's delay in issuing the warning together with the driver's response time.

The validity of the warning means evaluates whether the warning is given by sound, by display, or by haptic vibration of the steering wheel, and the details of each means. For example, for a sound warning, the tone, volume, and output pattern are evaluated; a resulting specification might be a 4 kHz tone presented as an 80 dB intermittent sound at 2 Hz.
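As an illustration of such a sound specification, the 4 kHz tone pulsed at 2 Hz can be synthesized by gating a sine wave. Amplitude here is normalized; mapping it to 80 dB SPL depends on the playback hardware, and the sample rate and 50% duty cycle are assumptions of mine.

```python
import math

def warning_tone(sample_rate_hz: int = 16000, duration_s: float = 1.0,
                 tone_hz: float = 4000.0, gate_hz: float = 2.0,
                 duty: float = 0.5) -> list:
    """Generate a 4 kHz tone gated on/off at 2 Hz (intermittent sound).
    Returns normalized samples in [-1, 1]."""
    n = int(sample_rate_hz * duration_s)
    samples = []
    for i in range(n):
        t = i / sample_rate_hz
        on = (t * gate_hz) % 1.0 < duty  # first half of each gate cycle
        samples.append(math.sin(2 * math.pi * tone_hz * t) if on else 0.0)
    return samples
```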

Various external sensors, such as LiDAR, radio-wave radar, and cameras, can detect the vehicle ahead for the front vehicle collision warning system, and none of them detects every vehicle 100% correctly. Since the sensor characteristics include over-detection (type I error) and non-detection (type II error), it is necessary to evaluate whether each is acceptable to the driver. For each sensor type, engineers know under what circumstances over-detection or non-detection is likely to occur, so the erroneous-detection situations can be reproduced, and therefore evaluated, on the driving simulator.

Depending on the driving scene, a false warning may occur that cannot be classified as either over-detection or non-detection. A typical case is a guardrail entering the sensor's detection range on a tight curve and being erroneously recognized as a preceding vehicle. The degree to which such false warnings are acceptable to the driver can also be evaluated with the driving simulator.

| | コメント (0)
