Monday 28 December 2015

Robots learn to complete tasks by watching ‘how-to’ videos

YouTube offers 180,000 videos on “How to make an omelette” and 281,000 on “How to tie a bowtie.” Photo used for representative purposes only.


Scientists are teaching robots to watch how-to videos and derive a series of step-by-step instructions
to perform a task, an advance that may help future ‘personal robots’ to do everyday housework
such as cooking and washing dishes.

The researchers at Cornell University in New York call their project “RoboWatch.”

There is a common underlying structure to most how-to videos, and there is plenty of
source material available, the researchers said.

YouTube offers 180,000 videos on “How to make an omelette” and 281,000 on “How to tie a bowtie.”

By scanning multiple videos on the same task, a computer can find what they all have in
common and reduce that to simple step-by-step instructions in natural language.
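RoboWatch’s actual pipeline (which parses visual features and subtitles) is more involved than this, but the core idea can be sketched as a toy: assume each video has already been reduced to an ordered list of step labels, and the instructions shared by every video then fall out of a longest-common-subsequence pass. All step names below are invented for illustration.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two step lists (classic DP)."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

def common_steps(videos):
    """Reduce many per-video step sequences to their shared ordered core."""
    return reduce(lcs, videos)

# Three hypothetical "videos", each already reduced to step labels
videos = [
    ["crack eggs", "whisk", "heat pan", "add butter", "pour", "fold"],
    ["crack eggs", "add milk", "whisk", "heat pan", "pour", "fold"],
    ["heat pan", "crack eggs", "whisk", "pour", "season", "fold"],
]
print(common_steps(videos))  # → ['crack eggs', 'whisk', 'pour', 'fold']
```

The steps that survive are exactly those that appear, in the same order, in every video, which is the flavour of “finding what they all have in common” described above.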

People post all these videos “to help people or maybe just to show off,” said graduate student Ozan Sener, lead author of a paper on the method presented at the International Conference on
Computer Vision in Chile.

The work is aimed at a future when we may have “personal robots” to perform everyday
housework — cooking, washing dishes, doing the laundry, feeding the cat — as well as to
assist the elderly and people with disabilities, researchers said. A key feature of the system
is that it is “unsupervised,” said Sener, who collaborated with colleagues at Stanford University.

In most previous work, robot learning is accomplished by having a human explain
what the robot is observing.

Saturday 26 December 2015

Microsoft’s virtual social assistant XiaoIce gets a job doing weather news

Saturday 12 December 2015

Surena: life-sized humanoid robot that can run, play football and speak Farsi

New Humanoid Robot Surena III

Image: University of Tehran/CAST

Iranian researchers at the University of Tehran unveiled yesterday the latest generation of their humanoid robot, named Surena III. In a demonstration, the adult-sized robot walked across a stage, imitated a person’s arm gestures, and stood on one foot while bending backwards.
Dr. Aghil Yousefi-Koma, a professor of mechanical engineering who leads the Surena project, tells IEEE Spectrum that the robot is designed as a research platform to explore bipedal locomotion, human-robot interaction, and other challenges in robotics. He also hopes Surena can help show the importance of engineering careers to students and the public, adding that he views the robot as a symbol of technology advancement “in the direction of peace and humanity.”
With a sleek plastic casing and bright LED eyes, Surena III is 1.9 meters (6 feet 3 inches) tall and weighs in at 98 kilograms (216 lbs). It’s equipped with a host of sensors, including a Kinect-based 3D vision module, and its joints are powered by 31 servomotors. The control software running on the robot and a monitoring system used by human operators to supervise its functions are based on the popular Robot Operating System, or ROS.
Surena has yet to demonstrate the mobility and dexterity of the most advanced humanoids, such as those fielded in the DARPA Robotics Challenge (DRC), but the Iranian humanoid has been making steady progress over the past seven years. The first version of the robot, unveiled in 2008, had only 8 degrees of freedom (DOF). Surena 2, announced in 2010, had 22 DOF and could walk at a pace of 0.03 meters per second. Now the third generation of the robot has 31 DOF and a walking speed more than six times as fast, at 0.2 m/s.
Dr. Yousefi-Koma, who heads University of Tehran’s Center for Advanced Systems and Technologies (CAST), where Surena was developed, says he followed the DRC events and, although Surena was not developed to participate in that competition, “one of the best applications for this robot may be employing it in disasters.”
He says Surena III, funded by the Industrial Development and Renovation Organization of Iran, is currently able to walk up and down stairs and ramps, adapt to irregularities on the ground, grasp objects, and also kick a soccer ball. He sent us footage showing some of those capabilities:


To build Surena III, the Iranian researchers significantly upgraded the robot’s sensors and actuators over the previous version. The vision system now allows the robot to detect faces and objects and track a person’s motions. A speech system can recognize some predefined sentences in Persian. Encoders embedded on all joints, six-axis force/torque sensors on the ankles, and an IMU on the torso help the robot remain stable. To power Surena’s hips and legs, the researchers used a combination of Maxon brushless DC motors, harmonic drives, and timing belt-pulley systems. The upper body uses ROBOTIS Dynamixel AX and MX servos.
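The article does not describe Surena’s stabilization algorithm. As a generic illustration of why an IMU matters for balance, here is a minimal complementary filter, a common way to fuse a fast but drifting gyro rate with a noisy but absolute accelerometer tilt estimate. All sample rates, biases, and the blend factor below are invented for the sketch.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend gyro integration (fast, but drifts) with the accelerometer
    tilt reading (noisy, but absolute); alpha weights the gyro term."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated stand-still: the true tilt is 0 degrees, the gyro has a
# constant 0.5 deg/s bias, and the accelerometer (noise omitted for
# brevity) reads the true angle.
angle = 0.0
for _ in range(1000):                      # 10 s at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=0.0, dt=0.01)
print(round(angle, 3))  # → 0.245; pure gyro integration would read 5.0
```

The accelerometer term keeps the gyro’s bias bounded instead of letting it integrate without limit, which is the basic reason a torso IMU helps a biped stay upright.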
The group also completely revamped the software system. It’s now based on ROS, and Dr. Yousefi-Koma says it “enables the robot to simultaneously communicate with the environment, manage its behaviors, monitor its sensors, and detect unwanted faults in the system.” A supervisory system with a graphical interface allows the researchers to monitor all joints and sensors, and an SDK with integrated C++ libraries allows them to more easily create and test new behaviors for the robot.
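The internals of Surena’s supervisory system are not public; a fault monitor of the kind described, one that watches joint readings and flags anything out of bounds, might be sketched like this. Joint names, limits, and readings below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JointLimits:
    min_pos: float   # radians
    max_pos: float   # radians
    max_temp: float  # degrees C

def check_joints(readings, limits):
    """Return a list of (joint, reason) faults; empty means all nominal.
    readings maps a joint name to a (position, temperature) pair."""
    faults = []
    for name, (pos, temp) in readings.items():
        lim = limits[name]
        if not (lim.min_pos <= pos <= lim.max_pos):
            faults.append((name, "position out of range"))
        if temp > lim.max_temp:
            faults.append((name, "overtemperature"))
    return faults

limits = {"left_knee": JointLimits(-0.1, 2.4, 70.0)}
print(check_joints({"left_knee": (2.6, 45.0)}, limits))
# → [('left_knee', 'position out of range')]
```

In a real ROS-based system such checks would typically run in their own node, subscribed to the joint-state topics that the GUI also monitors.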
About 70 students, engineers, and professors from Tehran University and five other Iranian institutions helped design and build Surena III. Local companies developing robotics software and speech systems also contributed to the project, and Dr. Yousefi-Koma expects that some of the technology developed for the humanoid could find applications in manufacturing, healthcare, and other industries.  
He says his group will focus now on the robot’s interactions with humans, and they want to make it more autonomous as well. They’re also preparing papers about the project, hoping to present them at IEEE conferences in the near future. And they are already making plans for Surena IV.

Saturday 15 August 2015

Windows 10

Introducing Windows 10

Windows 10 Launch - July 29

Windows 10 Preview
 

Friday 3 April 2015

I’m seeking professionals who might be interested in becoming a small business owner in USA

I’m seeking professionals who might be interested in becoming small business owners by taking over these books. But I’m also looking for professionals who might be interested in starting an agency from scratch!

Wednesday 4 March 2015

Apple iPhone 6S to add Force Touch for new depth of control?

Apple is expected to add the Force Touch controls found on the Apple Watch to its next iPhone 6S.
The rumour has started amid the Mobile World Congress announcements from other manufacturers. It suggests that Apple, unfaltering, is powering ahead with its next incremental smartphone upgrade.
Of course, this isn't confirmed fact; it comes from Apple Insider's sources, who claim there will once again be two devices. The codenames are N71 for the 4.7-inch model and N66 for the 5.5-inch version. Both are expected to feature Force Touch.
Force Touch was announced at the Apple Watch launch event. It gives the device the ability to recognise the difference between a normal touch and a press. This should increase the depth of control available from a single finger touch, theoretically meaning even less tapping and swiping.
Apple has called Force Touch its "most significant new sensing capability since Multi-Touch."
According to another source, Apple planned to add Force Touch to the iPhone 6 but it suffered calibration issues. On the Watch, the feature works by measuring variations in the flex of the screen, suggesting the iPhone 6S may have a more flexible screen than the current model.
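Apple has not published how its press detection works. Conceptually, though, turning a continuous flex signal into touch-versus-press events is a thresholding problem, usually with hysteresis so that readings hovering near a single cutoff don't flicker between states. A toy sketch, with invented threshold values:

```python
def classify_touches(flex_samples, press_on=0.6, press_off=0.4):
    """Label each normalized flex reading 'press' or 'touch'.
    Hysteresis: the threshold to enter the press state (press_on) is
    higher than the one to leave it (press_off), so a reading that
    wobbles around one cutoff doesn't toggle on every sample."""
    pressing = False
    labels = []
    for f in flex_samples:
        if pressing and f < press_off:
            pressing = False
        elif not pressing and f > press_on:
            pressing = True
        labels.append("press" if pressing else "touch")
    return labels

print(classify_touches([0.1, 0.5, 0.7, 0.5, 0.3, 0.65]))
# → ['touch', 'touch', 'press', 'press', 'touch', 'press']
```

Note how the fourth sample (0.5) stays a press: it is below the entry threshold but above the exit one, so the earlier press persists.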
Other rumours of a dual camera system have been dismissed, as this is expected to be a more incremental upgrade.